Oct 2 19:31:10.866248 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:31:10.866268 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:31:10.866278 kernel: BIOS-provided physical RAM map: Oct 2 19:31:10.866284 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 2 19:31:10.866289 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 2 19:31:10.866294 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 2 19:31:10.866301 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 2 19:31:10.866307 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 2 19:31:10.866313 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 2 19:31:10.866320 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 2 19:31:10.866326 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 2 19:31:10.866331 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 2 19:31:10.866337 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 2 19:31:10.866343 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 2 19:31:10.866350 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 2 19:31:10.866357 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 2 19:31:10.866363 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 2 19:31:10.866369 kernel: NX (Execute Disable) protection: active Oct 2 19:31:10.866375 kernel: e820: update [mem 0x9b3f9018-0x9b402c57] usable ==> usable Oct 2 19:31:10.866381 kernel: e820: update [mem 0x9b3f9018-0x9b402c57] usable ==> usable Oct 2 19:31:10.866387 kernel: e820: update [mem 0x9b1ac018-0x9b1e8e57] usable ==> usable Oct 2 19:31:10.866392 kernel: e820: update [mem 0x9b1ac018-0x9b1e8e57] usable ==> usable Oct 2 19:31:10.866398 kernel: extended physical RAM map: Oct 2 19:31:10.866404 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 2 19:31:10.866410 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 2 19:31:10.866418 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 2 19:31:10.866424 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 2 19:31:10.866430 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 2 19:31:10.866436 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 2 19:31:10.866441 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 2 19:31:10.866447 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1ac017] usable Oct 2 19:31:10.866453 kernel: reserve setup_data: [mem 0x000000009b1ac018-0x000000009b1e8e57] usable Oct 2 19:31:10.866459 kernel: reserve setup_data: [mem 0x000000009b1e8e58-0x000000009b3f9017] usable Oct 2 19:31:10.866465 kernel: reserve setup_data: [mem 0x000000009b3f9018-0x000000009b402c57] 
usable Oct 2 19:31:10.866471 kernel: reserve setup_data: [mem 0x000000009b402c58-0x000000009c8eefff] usable Oct 2 19:31:10.866477 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 2 19:31:10.866484 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 2 19:31:10.866490 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 2 19:31:10.866496 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 2 19:31:10.866513 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 2 19:31:10.866522 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 2 19:31:10.866529 kernel: efi: EFI v2.70 by EDK II Oct 2 19:31:10.866535 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Oct 2 19:31:10.866543 kernel: random: crng init done Oct 2 19:31:10.866549 kernel: SMBIOS 2.8 present. Oct 2 19:31:10.866556 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Oct 2 19:31:10.866562 kernel: Hypervisor detected: KVM Oct 2 19:31:10.866569 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:31:10.866575 kernel: kvm-clock: cpu 0, msr 60f8a001, primary cpu clock Oct 2 19:31:10.866581 kernel: kvm-clock: using sched offset of 4566557837 cycles Oct 2 19:31:10.866588 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:31:10.866595 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:31:10.866604 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:31:10.866610 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:31:10.866617 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 2 19:31:10.866624 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:31:10.866630 kernel: Using GB pages for direct mapping Oct 2 19:31:10.866637 kernel: Secure boot disabled Oct 2 19:31:10.866643 kernel: ACPI: Early table checksum verification disabled Oct 2 19:31:10.866650 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 2 19:31:10.866657 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Oct 2 19:31:10.866665 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:31:10.866671 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:31:10.866678 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 2 19:31:10.866684 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:31:10.866691 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:31:10.866703 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:31:10.866709 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 2 19:31:10.866716 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Oct 2 19:31:10.866722 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Oct 2 19:31:10.866730 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 2 19:31:10.866737 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Oct 2 19:31:10.866743 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Oct 2 19:31:10.866750 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Oct 2 19:31:10.866757 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Oct 2 19:31:10.866763 kernel: No NUMA configuration found Oct 2 19:31:10.866770 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 2 19:31:10.866776 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 2 19:31:10.866783 kernel: Zone ranges: Oct 2 19:31:10.866790 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:31:10.866797 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 2 19:31:10.866803 kernel: Normal empty Oct 2 19:31:10.866810 kernel: Movable zone start for each node Oct 2 19:31:10.866816 kernel: Early memory node ranges Oct 2 19:31:10.866823 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 2 19:31:10.866830 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 2 19:31:10.866836 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 2 19:31:10.866843 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 2 19:31:10.866851 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 2 19:31:10.866858 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 2 19:31:10.866864 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 2 19:31:10.866871 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:31:10.866877 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 2 19:31:10.866884 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 2 19:31:10.866890 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:31:10.866897 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 2 19:31:10.866904 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 2 19:31:10.866912 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 2 19:31:10.866918 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 2 19:31:10.866925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:31:10.866931 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:31:10.866938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:31:10.866945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:31:10.866951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:31:10.866958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:31:10.866964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:31:10.866972 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:31:10.866979 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:31:10.866985 kernel: TSC deadline timer available Oct 2 19:31:10.866992 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:31:10.866998 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:31:10.867005 kernel: kvm-guest: setup PV sched yield Oct 2 19:31:10.867011 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Oct 2 19:31:10.867018 kernel: Booting paravirtualized kernel on KVM Oct 2 19:31:10.867024 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:31:10.867031 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:31:10.867039 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:31:10.867046 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 
2 19:31:10.867058 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:31:10.867066 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:31:10.867073 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0 Oct 2 19:31:10.867079 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:31:10.867086 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 19:31:10.867093 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 2 19:31:10.867100 kernel: Policy zone: DMA32 Oct 2 19:31:10.867108 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:31:10.867115 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:31:10.867123 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:31:10.867130 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:31:10.867137 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:31:10.867145 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 166304K reserved, 0K cma-reserved) Oct 2 19:31:10.867153 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:31:10.867160 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:31:10.867167 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:31:10.867173 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:31:10.867181 kernel: rcu: RCU event tracing is enabled. Oct 2 19:31:10.867188 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:31:10.867195 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:31:10.867202 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:31:10.867209 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:31:10.867217 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:31:10.867224 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:31:10.867231 kernel: Console: colour dummy device 80x25 Oct 2 19:31:10.867238 kernel: printk: console [ttyS0] enabled Oct 2 19:31:10.867245 kernel: ACPI: Core revision 20210730 Oct 2 19:31:10.867252 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:31:10.867259 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:31:10.867265 kernel: x2apic enabled Oct 2 19:31:10.867272 kernel: Switched APIC routing to physical x2apic. Oct 2 19:31:10.867279 kernel: kvm-guest: setup PV IPIs Oct 2 19:31:10.867287 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:31:10.867294 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:31:10.867301 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:31:10.867308 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:31:10.867315 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:31:10.867322 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:31:10.867329 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:31:10.867335 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:31:10.867344 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:31:10.867351 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:31:10.867358 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:31:10.867364 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:31:10.867371 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:31:10.867379 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:31:10.867408 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:31:10.867424 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:31:10.867432 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:31:10.867441 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:31:10.867448 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:31:10.867456 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:31:10.867462 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:31:10.867469 kernel: LSM: Security Framework initializing Oct 2 19:31:10.867476 kernel: SELinux: Initializing. Oct 2 19:31:10.867483 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:31:10.867490 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:31:10.867497 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:31:10.867523 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:31:10.867530 kernel: ... version: 0 Oct 2 19:31:10.867537 kernel: ... bit width: 48 Oct 2 19:31:10.867543 kernel: ... generic registers: 6 Oct 2 19:31:10.867550 kernel: ... value mask: 0000ffffffffffff Oct 2 19:31:10.867557 kernel: ... max period: 00007fffffffffff Oct 2 19:31:10.867564 kernel: ... fixed-purpose events: 0 Oct 2 19:31:10.867571 kernel: ... event mask: 000000000000003f Oct 2 19:31:10.867578 kernel: signal: max sigframe size: 1776 Oct 2 19:31:10.867586 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:31:10.867593 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:31:10.867600 kernel: x86: Booting SMP configuration: Oct 2 19:31:10.867607 kernel: .... 
node #0, CPUs: #1 Oct 2 19:31:10.867613 kernel: kvm-clock: cpu 1, msr 60f8a041, secondary cpu clock Oct 2 19:31:10.867620 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:31:10.867627 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0 Oct 2 19:31:10.867634 kernel: #2 Oct 2 19:31:10.867641 kernel: kvm-clock: cpu 2, msr 60f8a081, secondary cpu clock Oct 2 19:31:10.867648 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:31:10.867657 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0 Oct 2 19:31:10.867663 kernel: #3 Oct 2 19:31:10.867670 kernel: kvm-clock: cpu 3, msr 60f8a0c1, secondary cpu clock Oct 2 19:31:10.867677 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:31:10.867683 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0 Oct 2 19:31:10.867690 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:31:10.867703 kernel: smpboot: Max logical packages: 1 Oct 2 19:31:10.867710 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:31:10.867717 kernel: devtmpfs: initialized Oct 2 19:31:10.867725 kernel: x86/mm: Memory block size: 128MB Oct 2 19:31:10.867732 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 2 19:31:10.867739 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 2 19:31:10.867746 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 2 19:31:10.867753 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 2 19:31:10.867760 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 2 19:31:10.867767 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:31:10.867774 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:31:10.867781 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:31:10.867790 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:31:10.867796 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:31:10.867803 kernel: audit: type=2000 audit(1696275069.946:1): state=initialized audit_enabled=0 res=1 Oct 2 19:31:10.867810 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:31:10.867817 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:31:10.867824 kernel: cpuidle: using governor menu Oct 2 19:31:10.867831 kernel: ACPI: bus type PCI registered Oct 2 19:31:10.867838 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:31:10.867848 kernel: dca service started, version 1.12.1 Oct 2 19:31:10.867866 kernel: PCI: Using configuration type 1 for base access Oct 2 19:31:10.867875 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:31:10.867884 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
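The SMP bring-up above ends with one node and four virtual CPUs online (the AMD EPYC 7402P vCPUs identified a few entries earlier, totalling 22357.98 BogoMIPS). A minimal Python sketch, purely illustrative and not part of the boot log, of how the same topology can be confirmed from inside the guest by reading /proc/cpuinfo:

# Illustrative sketch: count the online vCPUs and their model names, which
# should match the "smp: Brought up 1 node, 4 CPUs" line above.
def cpu_models(path="/proc/cpuinfo"):
    models = []
    with open(path) as f:
        for line in f:
            if line.startswith("model name"):
                models.append(line.split(":", 1)[1].strip())
    return models

if __name__ == "__main__":
    models = cpu_models()
    print(f"{len(models)} CPUs: {sorted(set(models))}")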
Oct 2 19:31:10.867893 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:31:10.867900 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:31:10.867907 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:31:10.867914 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:31:10.867921 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:31:10.867927 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:31:10.867937 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:31:10.867944 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:31:10.867950 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:31:10.867957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:31:10.867964 kernel: ACPI: Interpreter enabled Oct 2 19:31:10.867971 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:31:10.867992 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:31:10.868009 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:31:10.868016 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:31:10.868025 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:31:10.868299 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:31:10.868343 kernel: acpiphp: Slot [3] registered Oct 2 19:31:10.868352 kernel: acpiphp: Slot [4] registered Oct 2 19:31:10.868360 kernel: acpiphp: Slot [5] registered Oct 2 19:31:10.868368 kernel: acpiphp: Slot [6] registered Oct 2 19:31:10.868375 kernel: acpiphp: Slot [7] registered Oct 2 19:31:10.868384 kernel: acpiphp: Slot [8] registered Oct 2 19:31:10.868398 kernel: acpiphp: Slot [9] registered Oct 2 19:31:10.868406 kernel: acpiphp: Slot [10] registered Oct 2 19:31:10.868413 kernel: acpiphp: Slot [11] registered Oct 2 19:31:10.868420 kernel: acpiphp: Slot [12] registered Oct 2 19:31:10.868427 kernel: acpiphp: Slot [13] registered Oct 2 19:31:10.868435 kernel: acpiphp: Slot [14] registered Oct 2 19:31:10.868442 kernel: acpiphp: Slot [15] registered Oct 2 19:31:10.868449 kernel: acpiphp: Slot [16] registered Oct 2 19:31:10.868457 kernel: acpiphp: Slot [17] registered Oct 2 19:31:10.868464 kernel: acpiphp: Slot [18] registered Oct 2 19:31:10.868473 kernel: acpiphp: Slot [19] registered Oct 2 19:31:10.868480 kernel: acpiphp: Slot [20] registered Oct 2 19:31:10.868488 kernel: acpiphp: Slot [21] registered Oct 2 19:31:10.868495 kernel: acpiphp: Slot [22] registered Oct 2 19:31:10.868515 kernel: acpiphp: Slot [23] registered Oct 2 19:31:10.868523 kernel: acpiphp: Slot [24] registered Oct 2 19:31:10.868530 kernel: acpiphp: Slot [25] registered Oct 2 19:31:10.868537 kernel: acpiphp: Slot [26] registered Oct 2 19:31:10.868544 kernel: acpiphp: Slot [27] registered Oct 2 19:31:10.868554 kernel: acpiphp: Slot [28] registered Oct 2 19:31:10.868561 kernel: acpiphp: Slot [29] registered Oct 2 19:31:10.868568 kernel: acpiphp: Slot [30] registered Oct 2 19:31:10.868576 kernel: acpiphp: Slot [31] registered Oct 2 19:31:10.868583 kernel: PCI host bridge to bus 0000:00 Oct 2 19:31:10.868683 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:31:10.868775 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:31:10.868842 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:31:10.868914 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:31:10.868981 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Oct 2 19:31:10.869045 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:31:10.869142 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:31:10.869231 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:31:10.869318 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:31:10.869454 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:31:10.869548 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:31:10.869624 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:31:10.869705 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:31:10.869862 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:31:10.869967 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:31:10.870045 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 2 19:31:10.870129 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Oct 2 19:31:10.870220 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:31:10.870295 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 2 19:31:10.870368 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Oct 2 19:31:10.870441 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 2 19:31:10.870543 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Oct 2 19:31:10.870633 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:31:10.870728 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:31:10.870830 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:31:10.873291 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 2 19:31:10.873410 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 2 19:31:10.873497 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:31:10.873605 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:31:10.873724 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 2 19:31:10.873808 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 2 19:31:10.873911 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:31:10.873988 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:31:10.874062 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Oct 2 19:31:10.874147 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 2 19:31:10.874220 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 2 19:31:10.874229 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:31:10.874240 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:31:10.874248 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:31:10.874255 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:31:10.874262 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:31:10.874269 kernel: iommu: Default domain type: Translated Oct 2 19:31:10.874276 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:31:10.874366 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:31:10.874442 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:31:10.875157 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Oct 2 19:31:10.875178 kernel: vgaarb: loaded Oct 2 19:31:10.875186 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:31:10.875193 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:31:10.875200 kernel: PTP clock support registered Oct 2 19:31:10.875208 kernel: Registered efivars operations Oct 2 19:31:10.875215 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:31:10.875222 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:31:10.875230 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 2 19:31:10.875237 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 2 19:31:10.875245 kernel: e820: reserve RAM buffer [mem 0x9b1ac018-0x9bffffff] Oct 2 19:31:10.875252 kernel: e820: reserve RAM buffer [mem 0x9b3f9018-0x9bffffff] Oct 2 19:31:10.875259 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 2 19:31:10.875266 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 2 19:31:10.875273 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:31:10.875280 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:31:10.875287 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:31:10.875297 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:31:10.875307 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:31:10.875319 kernel: pnp: PnP ACPI init Oct 2 19:31:10.876607 kernel: pnp 00:02: [dma 2] Oct 2 19:31:10.876647 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:31:10.876656 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:31:10.876665 kernel: NET: Registered PF_INET protocol family Oct 2 19:31:10.876673 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:31:10.876680 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:31:10.876688 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:31:10.876710 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:31:10.876717 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:31:10.876725 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:31:10.876732 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:31:10.876740 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:31:10.876747 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:31:10.876754 kernel: NET: Registered PF_XDP protocol family Oct 2 19:31:10.876881 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 2 19:31:10.876992 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 2 19:31:10.877069 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:31:10.877136 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:31:10.877201 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:31:10.877266 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:31:10.877331 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Oct 2 19:31:10.877434 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:31:10.877539 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:31:10.877618 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:31:10.877628 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:31:10.877636 kernel: Initialise system trusted keyrings Oct 2 19:31:10.877644 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:31:10.877652 kernel: Key type asymmetric registered Oct 2 19:31:10.877659 kernel: Asymmetric key parser 'x509' registered Oct 2 19:31:10.877667 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:31:10.877674 kernel: io scheduler mq-deadline registered Oct 2 19:31:10.877682 kernel: io scheduler kyber registered Oct 2 19:31:10.877698 kernel: io scheduler bfq registered Oct 2 19:31:10.877706 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:31:10.877714 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:31:10.877722 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:31:10.877730 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:31:10.877737 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:31:10.877745 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:31:10.877753 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:31:10.877760 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:31:10.877770 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:31:10.877777 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:31:10.877889 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:31:10.877993 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:31:10.878088 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:31:10 UTC (1696275070) Oct 2 19:31:10.878158 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:31:10.878167 kernel: efifb: probing for efifb Oct 2 19:31:10.878175 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 2 19:31:10.878183 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 2 19:31:10.878190 kernel: efifb: scrolling: redraw Oct 2 19:31:10.878198 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 2 19:31:10.878206 kernel: Console: switching to colour frame buffer device 160x50 Oct 2 19:31:10.878214 kernel: fb0: EFI VGA frame buffer device Oct 2 19:31:10.878224 kernel: pstore: Registered efi as persistent store backend Oct 2 19:31:10.878231 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:31:10.878239 kernel: Segment Routing with IPv6 Oct 2 19:31:10.878247 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:31:10.878254 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:31:10.878262 kernel: Key type dns_resolver registered Oct 2 19:31:10.878269 kernel: IPI shorthand broadcast: enabled Oct 2 19:31:10.878277 kernel: sched_clock: Marking stable (438114836, 88223810)->(548013577, -21674931) Oct 2 19:31:10.878284 kernel: registered taskstats version 1 Oct 2 19:31:10.878294 kernel: Loading compiled-in X.509 certificates Oct 2 19:31:10.878301 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:31:10.878309 kernel: Key type .fscrypt registered Oct 2 19:31:10.878316 kernel: Key type fscrypt-provisioning registered Oct 2 19:31:10.878324 kernel: pstore: Using crash dump compression: deflate Oct 2 19:31:10.878331 kernel: ima: No TPM chip found, activating TPM-bypass! 
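The rtc_cmos entry above pairs the wall-clock time with its Unix timestamp. A one-line check (illustrative, not part of the log) confirms that 1696275070 seconds since the epoch is indeed 2023-10-02T19:31:10 UTC:

# Convert the epoch value logged by rtc_cmos back to UTC wall-clock time.
from datetime import datetime, timezone
print(datetime.fromtimestamp(1696275070, tz=timezone.utc).isoformat())
# prints 2023-10-02T19:31:10+00:00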
Oct 2 19:31:10.878339 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:31:10.878348 kernel: ima: No architecture policies found Oct 2 19:31:10.878355 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:31:10.878364 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:31:10.878372 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:31:10.878380 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:31:10.878387 kernel: Run /init as init process Oct 2 19:31:10.878395 kernel: with arguments: Oct 2 19:31:10.878402 kernel: /init Oct 2 19:31:10.878410 kernel: with environment: Oct 2 19:31:10.878419 kernel: HOME=/ Oct 2 19:31:10.878426 kernel: TERM=linux Oct 2 19:31:10.878433 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:31:10.878445 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:31:10.878455 systemd[1]: Detected virtualization kvm. Oct 2 19:31:10.878463 systemd[1]: Detected architecture x86-64. Oct 2 19:31:10.878473 systemd[1]: Running in initrd. Oct 2 19:31:10.878490 systemd[1]: No hostname configured, using default hostname. Oct 2 19:31:10.878541 systemd[1]: Hostname set to . Oct 2 19:31:10.878565 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:31:10.878576 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:31:10.878585 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:31:10.878593 systemd[1]: Reached target cryptsetup.target. Oct 2 19:31:10.878601 systemd[1]: Reached target paths.target. Oct 2 19:31:10.878609 systemd[1]: Reached target slices.target. Oct 2 19:31:10.878616 systemd[1]: Reached target swap.target. Oct 2 19:31:10.878624 systemd[1]: Reached target timers.target. Oct 2 19:31:10.878635 systemd[1]: Listening on iscsid.socket. Oct 2 19:31:10.878643 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:31:10.878651 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:31:10.878659 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:31:10.878666 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:31:10.878675 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:31:10.878682 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:31:10.878691 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:31:10.878704 systemd[1]: Reached target sockets.target. Oct 2 19:31:10.878714 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:31:10.878722 systemd[1]: Finished network-cleanup.service. Oct 2 19:31:10.878730 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:31:10.878738 systemd[1]: Starting systemd-journald.service... Oct 2 19:31:10.878746 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:31:10.878754 systemd[1]: Starting systemd-resolved.service... Oct 2 19:31:10.878762 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:31:10.878770 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:31:10.878778 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:31:10.878788 kernel: audit: type=1130 audit(1696275070.865:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:31:10.878796 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:31:10.878804 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:31:10.878812 kernel: audit: type=1130 audit(1696275070.868:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.878820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:31:10.878833 systemd-journald[198]: Journal started Oct 2 19:31:10.878887 systemd-journald[198]: Runtime Journal (/run/log/journal/c029a50dbc0345f8820b62803c31d964) is 6.0M, max 48.4M, 42.4M free. Oct 2 19:31:10.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.868723 systemd-modules-load[199]: Inserted module 'overlay' Oct 2 19:31:10.881517 systemd[1]: Started systemd-journald.service. Oct 2 19:31:10.881861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:31:10.886868 kernel: audit: type=1130 audit(1696275070.880:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.886888 kernel: audit: type=1130 audit(1696275070.882:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.891520 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:31:10.893260 systemd-modules-load[199]: Inserted module 'br_netfilter' Oct 2 19:31:10.893520 kernel: Bridge firewalling registered Oct 2 19:31:10.894500 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:31:10.897913 kernel: audit: type=1130 audit(1696275070.894:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.897412 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:31:10.903156 systemd-resolved[200]: Positive Trust Anchors: Oct 2 19:31:10.903175 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:31:10.903205 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:31:10.906423 systemd-resolved[200]: Defaulting to hostname 'linux'. Oct 2 19:31:10.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.913009 dracut-cmdline[215]: dracut-dracut-053 Oct 2 19:31:10.913009 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:31:10.917144 kernel: audit: type=1130 audit(1696275070.909:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.917167 kernel: SCSI subsystem initialized Oct 2 19:31:10.908281 systemd[1]: Started systemd-resolved.service. Oct 2 19:31:10.909811 systemd[1]: Reached target nss-lookup.target. Oct 2 19:31:10.924611 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:31:10.924643 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:31:10.925535 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:31:10.928134 systemd-modules-load[199]: Inserted module 'dm_multipath' Oct 2 19:31:10.928932 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:31:10.930945 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:31:10.934372 kernel: audit: type=1130 audit(1696275070.929:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.941308 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:31:10.944229 kernel: audit: type=1130 audit(1696275070.940:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:10.983552 kernel: Loading iSCSI transport class v2.0-870. 
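dracut-cmdline above echoes the same key=value arguments the kernel reported at boot (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., and so on). A minimal, illustrative Python sketch of splitting /proc/cmdline into such key/value pairs; the helper name is made up, and repeated keys such as rootflags simply keep their last value:

# Illustrative helper: parse /proc/cmdline into a dict; bare flags map to None.
def parse_cmdline(path="/proc/cmdline"):
    parsed = {}
    with open(path) as f:
        for arg in f.read().split():
            key, sep, value = arg.partition("=")
            parsed[key] = value if sep else None
    return parsed

if __name__ == "__main__":
    args = parse_cmdline()
    print(args.get("root"))            # e.g. LABEL=ROOT
    print(args.get("verity.usrhash"))  # hash used to set up the /usr verity device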
Oct 2 19:31:10.998528 kernel: iscsi: registered transport (tcp) Oct 2 19:31:11.018545 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:31:11.018567 kernel: QLogic iSCSI HBA Driver Oct 2 19:31:11.051890 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:31:11.055679 kernel: audit: type=1130 audit(1696275071.052:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:11.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:11.053436 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:31:11.098536 kernel: raid6: avx2x4 gen() 31019 MB/s Oct 2 19:31:11.115522 kernel: raid6: avx2x4 xor() 8285 MB/s Oct 2 19:31:11.132543 kernel: raid6: avx2x2 gen() 32617 MB/s Oct 2 19:31:11.149535 kernel: raid6: avx2x2 xor() 18954 MB/s Oct 2 19:31:11.166519 kernel: raid6: avx2x1 gen() 23976 MB/s Oct 2 19:31:11.183523 kernel: raid6: avx2x1 xor() 11314 MB/s Oct 2 19:31:11.200536 kernel: raid6: sse2x4 gen() 10472 MB/s Oct 2 19:31:11.217524 kernel: raid6: sse2x4 xor() 5845 MB/s Oct 2 19:31:11.234539 kernel: raid6: sse2x2 gen() 10712 MB/s Oct 2 19:31:11.251522 kernel: raid6: sse2x2 xor() 8824 MB/s Oct 2 19:31:11.268531 kernel: raid6: sse2x1 gen() 11928 MB/s Oct 2 19:31:11.285587 kernel: raid6: sse2x1 xor() 4732 MB/s Oct 2 19:31:11.285619 kernel: raid6: using algorithm avx2x2 gen() 32617 MB/s Oct 2 19:31:11.285628 kernel: raid6: .... xor() 18954 MB/s, rmw enabled Oct 2 19:31:11.287167 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:31:11.298532 kernel: xor: automatically using best checksumming function avx Oct 2 19:31:11.392547 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:31:11.400984 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:31:11.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:11.401000 audit: BPF prog-id=7 op=LOAD Oct 2 19:31:11.401000 audit: BPF prog-id=8 op=LOAD Oct 2 19:31:11.402619 systemd[1]: Starting systemd-udevd.service... Oct 2 19:31:11.415251 systemd-udevd[400]: Using default interface naming scheme 'v252'. Oct 2 19:31:11.421647 systemd[1]: Started systemd-udevd.service. Oct 2 19:31:11.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:11.425240 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:31:11.437978 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Oct 2 19:31:11.468541 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:31:11.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:11.470542 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:31:11.514025 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:31:11.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:11.537532 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:31:11.549568 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:31:11.552524 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:31:11.557525 kernel: libata version 3.00 loaded. Oct 2 19:31:11.572342 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:31:11.572557 kernel: scsi host0: ata_piix Oct 2 19:31:11.573697 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:31:11.573718 kernel: scsi host1: ata_piix Oct 2 19:31:11.574519 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:31:11.574669 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:31:11.579519 kernel: AES CTR mode by8 optimization enabled Oct 2 19:31:11.594028 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:31:11.596531 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465) Oct 2 19:31:11.596202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:31:11.600799 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:31:11.605028 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:31:11.613953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:31:11.615961 systemd[1]: Starting disk-uuid.service... Oct 2 19:31:11.623544 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:31:11.626531 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:31:11.732566 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:31:11.732641 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:31:11.764626 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:31:11.764826 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:31:11.782552 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:31:12.631288 disk-uuid[524]: The operation has completed successfully. Oct 2 19:31:12.632486 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:31:12.653314 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:31:12.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.653408 systemd[1]: Finished disk-uuid.service. Oct 2 19:31:12.659761 systemd[1]: Starting verity-setup.service... Oct 2 19:31:12.674527 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:31:12.711914 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:31:12.713287 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:31:12.715689 systemd[1]: Finished verity-setup.service. Oct 2 19:31:12.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.788528 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:31:12.788537 systemd[1]: Mounted sysusr-usr.mount. 
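systemd above resolves the virtio disk's partitions by partlabel (USR-A), partuuid and filesystem label (ROOT, EFI-SYSTEM, OEM). A short sketch, assuming udev has populated /dev/disk/by-label as usual, that lists the same labels and the block devices they point at:

import os

# Illustrative: list filesystem labels and the devices they resolve to.
by_label = "/dev/disk/by-label"
for name in sorted(os.listdir(by_label)):
    target = os.path.realpath(os.path.join(by_label, name))
    print(f"{name} -> {target}")   # e.g. ROOT -> /dev/vda9, OEM -> /dev/vda6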
Oct 2 19:31:12.789296 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:31:12.790024 systemd[1]: Starting ignition-setup.service... Oct 2 19:31:12.791606 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:31:12.797699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:31:12.797727 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:31:12.797737 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:31:12.805223 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:31:12.812158 systemd[1]: Finished ignition-setup.service. Oct 2 19:31:12.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.813870 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:31:12.853126 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:31:12.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.854000 audit: BPF prog-id=9 op=LOAD Oct 2 19:31:12.855608 systemd[1]: Starting systemd-networkd.service... Oct 2 19:31:12.875752 systemd-networkd[702]: lo: Link UP Oct 2 19:31:12.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.875761 systemd-networkd[702]: lo: Gained carrier Oct 2 19:31:12.876179 systemd-networkd[702]: Enumeration completed Oct 2 19:31:12.876263 systemd[1]: Started systemd-networkd.service. Oct 2 19:31:12.876371 systemd-networkd[702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:31:12.877327 systemd[1]: Reached target network.target. Oct 2 19:31:12.878647 systemd[1]: Starting iscsiuio.service... Oct 2 19:31:12.883168 systemd-networkd[702]: eth0: Link UP Oct 2 19:31:12.883172 systemd-networkd[702]: eth0: Gained carrier Oct 2 19:31:12.892775 systemd[1]: Started iscsiuio.service. Oct 2 19:31:12.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.894192 systemd[1]: Starting iscsid.service... Oct 2 19:31:12.898899 iscsid[708]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:31:12.898899 iscsid[708]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:31:12.898899 iscsid[708]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:31:12.898899 iscsid[708]: If using hardware iscsi like qla4xxx this message can be ignored. 
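The iscsid warning above is harmless on this VM (no software iSCSI targets are in use), but it also spells out the remedy: provide an InitiatorName in IQN form. A hypothetical sketch of generating and writing such a file; the reversed domain and identifier are placeholders, not values from this system:

import datetime
import uuid

# Hypothetical example: write /etc/iscsi/initiatorname.iscsi with a generated
# IQN of the form iqn.yyyy-mm.<reversed domain>[:identifier].
def write_initiator_name(path="/etc/iscsi/initiatorname.iscsi",
                         reversed_domain="com.example"):
    iqn = f"iqn.{datetime.date.today():%Y-%m}.{reversed_domain}:{uuid.uuid4().hex[:12]}"
    with open(path, "w") as f:
        f.write(f"InitiatorName={iqn}\n")
    return iqn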
Oct 2 19:31:12.898899 iscsid[708]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:31:12.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.907494 iscsid[708]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:31:12.901755 systemd[1]: Started iscsid.service. Oct 2 19:31:12.903672 systemd-networkd[702]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:31:12.904971 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:31:12.915876 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:31:12.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.916553 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:31:12.917542 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:31:12.918143 systemd[1]: Reached target remote-fs.target. Oct 2 19:31:12.918738 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:31:12.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.927636 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:31:12.936953 ignition[624]: Ignition 2.14.0 Oct 2 19:31:12.936962 ignition[624]: Stage: fetch-offline Oct 2 19:31:12.937036 ignition[624]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:31:12.937046 ignition[624]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:31:12.937162 ignition[624]: parsed url from cmdline: "" Oct 2 19:31:12.937165 ignition[624]: no config URL provided Oct 2 19:31:12.937170 ignition[624]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:31:12.937177 ignition[624]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:31:12.937192 ignition[624]: op(1): [started] loading QEMU firmware config module Oct 2 19:31:12.937197 ignition[624]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:31:12.942366 ignition[624]: op(1): [finished] loading QEMU firmware config module Oct 2 19:31:12.953117 ignition[624]: parsing config with SHA512: 177ec4a73622fbfefe913212787d91d2e9be11ef36619237aab9803ccfd20f9ada4f614074e725be74fa445a56a89c9b15ae91f9ac74a3910c4d33f03d6a7644 Oct 2 19:31:12.974946 unknown[624]: fetched base config from "system" Oct 2 19:31:12.974960 unknown[624]: fetched user config from "qemu" Oct 2 19:31:12.975135 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.14 Oct 2 19:31:12.975745 ignition[624]: fetch-offline: fetch-offline passed Oct 2 19:31:12.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:12.975149 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Oct 2 19:31:12.975814 ignition[624]: Ignition finished successfully Oct 2 19:31:12.976804 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:31:12.977745 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
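Ignition above logs a SHA512 fingerprint of the config it parsed (fetched from QEMU's fw_cfg during this boot). An illustrative sketch of computing the same kind of digest for a config file on disk; the path is an assumption, since here the user config arrived over fw_cfg rather than from /usr/lib/ignition/user.ign:

import hashlib

# Stream a config file through SHA-512 and print the hex digest, the same
# fingerprint format Ignition logs when it parses a config.
def config_sha512(path):
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(config_sha512("/usr/lib/ignition/user.ign"))  # path is illustrative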
Oct 2 19:31:12.978592 systemd[1]: Starting ignition-kargs.service... Oct 2 19:31:12.999627 ignition[724]: Ignition 2.14.0 Oct 2 19:31:12.999646 ignition[724]: Stage: kargs Oct 2 19:31:12.999780 ignition[724]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:31:12.999795 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:31:13.001392 ignition[724]: kargs: kargs passed Oct 2 19:31:13.003215 systemd[1]: Finished ignition-kargs.service. Oct 2 19:31:13.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:13.001465 ignition[724]: Ignition finished successfully Oct 2 19:31:13.006015 systemd[1]: Starting ignition-disks.service... Oct 2 19:31:13.015454 ignition[730]: Ignition 2.14.0 Oct 2 19:31:13.016437 ignition[730]: Stage: disks Oct 2 19:31:13.017072 ignition[730]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:31:13.017783 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:31:13.027485 ignition[730]: disks: disks passed Oct 2 19:31:13.027576 ignition[730]: Ignition finished successfully Oct 2 19:31:13.028479 systemd[1]: Finished ignition-disks.service. Oct 2 19:31:13.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:13.029367 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:31:13.030447 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:31:13.031091 systemd[1]: Reached target local-fs.target. Oct 2 19:31:13.031611 systemd[1]: Reached target sysinit.target. Oct 2 19:31:13.031653 systemd[1]: Reached target basic.target. Oct 2 19:31:13.032671 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:31:13.042342 systemd-fsck[738]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:31:13.046927 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:31:13.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:13.048013 systemd[1]: Mounting sysroot.mount... Oct 2 19:31:13.054267 systemd[1]: Mounted sysroot.mount. Oct 2 19:31:13.055723 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:31:13.054794 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:31:13.056481 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:31:13.057216 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:31:13.057246 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:31:13.057265 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:31:13.059404 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:31:13.060486 systemd[1]: Starting initrd-setup-root.service... 
Oct 2 19:31:13.064786 initrd-setup-root[748]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:31:13.068459 initrd-setup-root[756]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:31:13.072095 initrd-setup-root[764]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:31:13.075147 initrd-setup-root[772]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:31:13.100541 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:31:13.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:13.101316 systemd[1]: Starting ignition-mount.service... Oct 2 19:31:13.102734 systemd[1]: Starting sysroot-boot.service... Oct 2 19:31:13.107124 bash[789]: umount: /sysroot/usr/share/oem: not mounted. Oct 2 19:31:13.114579 ignition[790]: INFO : Ignition 2.14.0 Oct 2 19:31:13.114579 ignition[790]: INFO : Stage: mount Oct 2 19:31:13.115817 ignition[790]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:31:13.115817 ignition[790]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:31:13.115817 ignition[790]: INFO : mount: mount passed Oct 2 19:31:13.115817 ignition[790]: INFO : Ignition finished successfully Oct 2 19:31:13.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:13.116598 systemd[1]: Finished ignition-mount.service. Oct 2 19:31:13.120677 systemd[1]: Finished sysroot-boot.service. Oct 2 19:31:13.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:13.726187 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:31:13.732732 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) Oct 2 19:31:13.732768 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:31:13.732778 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:31:13.733720 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:31:13.737075 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:31:13.738822 systemd[1]: Starting ignition-files.service... 
Oct 2 19:31:13.753421 ignition[819]: INFO : Ignition 2.14.0 Oct 2 19:31:13.753421 ignition[819]: INFO : Stage: files Oct 2 19:31:13.754672 ignition[819]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:31:13.754672 ignition[819]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:31:13.756198 ignition[819]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:31:13.757038 ignition[819]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:31:13.757038 ignition[819]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:31:13.758952 ignition[819]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:31:13.759905 ignition[819]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:31:13.759905 ignition[819]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:31:13.759777 unknown[819]: wrote ssh authorized keys file for user: core Oct 2 19:31:13.762742 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:31:13.762742 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:31:13.946441 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:31:14.074737 systemd-networkd[702]: eth0: Gained IPv6LL Oct 2 19:31:14.114139 ignition[819]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:31:14.114139 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:31:14.122335 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:31:14.122335 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:31:14.210941 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:31:14.325186 ignition[819]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:31:14.325186 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:31:14.328395 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:31:14.328395 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:31:14.422740 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:31:15.137332 ignition[819]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Oct 2 19:31:15.139649 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:31:15.139649 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:31:15.139649 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:31:15.197475 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:31:16.632645 ignition[819]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Oct 2 19:31:16.635152 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:31:16.635152 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:31:16.635152 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:31:16.635152 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:31:16.635152 ignition[819]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(9): [started] processing unit "coreos-metadata.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(9): op(a): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(9): op(a): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(9): [finished] processing unit "coreos-metadata.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(b): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(b): op(c): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(b): op(c): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(b): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(d): [started] processing unit "prepare-critools.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(d): op(e): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(d): op(e): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(d): [finished] processing unit "prepare-critools.service" Oct 2 19:31:16.635152 ignition[819]: INFO : files: op(f): [started] setting preset to disabled 
for "coreos-metadata.service" Oct 2 19:31:16.659964 ignition[819]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:31:16.721343 ignition[819]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:31:16.722614 ignition[819]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:31:16.722614 ignition[819]: INFO : files: op(11): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:31:16.722614 ignition[819]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:31:16.722614 ignition[819]: INFO : files: op(12): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:31:16.722614 ignition[819]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:31:16.722614 ignition[819]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:31:16.722614 ignition[819]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:31:16.722614 ignition[819]: INFO : files: files passed Oct 2 19:31:16.722614 ignition[819]: INFO : Ignition finished successfully Oct 2 19:31:16.744820 kernel: kauditd_printk_skb: 24 callbacks suppressed Oct 2 19:31:16.744847 kernel: audit: type=1130 audit(1696275076.723:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.744858 kernel: audit: type=1130 audit(1696275076.732:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.744868 kernel: audit: type=1130 audit(1696275076.736:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.744877 kernel: audit: type=1131 audit(1696275076.736:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.723377 systemd[1]: Finished ignition-files.service. Oct 2 19:31:16.725316 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Oct 2 19:31:16.746612 initrd-setup-root-after-ignition[843]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:31:16.728700 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:31:16.748966 initrd-setup-root-after-ignition[846]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:31:16.729616 systemd[1]: Starting ignition-quench.service... Oct 2 19:31:16.732495 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:31:16.733558 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:31:16.733641 systemd[1]: Finished ignition-quench.service. Oct 2 19:31:16.736913 systemd[1]: Reached target ignition-complete.target. Oct 2 19:31:16.742889 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:31:16.754499 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:31:16.754597 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:31:16.760301 kernel: audit: type=1130 audit(1696275076.755:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.760317 kernel: audit: type=1131 audit(1696275076.755:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.755734 systemd[1]: Reached target initrd-fs.target. Oct 2 19:31:16.760325 systemd[1]: Reached target initrd.target. Oct 2 19:31:16.760850 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:31:16.761499 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:31:16.771131 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:31:16.774730 kernel: audit: type=1130 audit(1696275076.771:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.772582 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:31:16.780768 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:31:16.781342 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:31:16.782382 systemd[1]: Stopped target timers.target. Oct 2 19:31:16.783613 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:31:16.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.783750 systemd[1]: Stopped dracut-pre-pivot.service. 
Oct 2 19:31:16.788559 kernel: audit: type=1131 audit(1696275076.784:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.784672 systemd[1]: Stopped target initrd.target. Oct 2 19:31:16.787706 systemd[1]: Stopped target basic.target. Oct 2 19:31:16.788701 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:31:16.789915 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:31:16.790994 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:31:16.792453 systemd[1]: Stopped target remote-fs.target. Oct 2 19:31:16.793770 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:31:16.795243 systemd[1]: Stopped target sysinit.target. Oct 2 19:31:16.796434 systemd[1]: Stopped target local-fs.target. Oct 2 19:31:16.797785 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:31:16.799087 systemd[1]: Stopped target swap.target. Oct 2 19:31:16.804597 kernel: audit: type=1131 audit(1696275076.801:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.800274 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:31:16.800402 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:31:16.808909 kernel: audit: type=1131 audit(1696275076.805:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.801738 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:31:16.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.804675 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:31:16.804804 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:31:16.806128 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:31:16.806256 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:31:16.809123 systemd[1]: Stopped target paths.target. Oct 2 19:31:16.810001 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:31:16.811565 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:31:16.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.812634 systemd[1]: Stopped target slices.target. Oct 2 19:31:16.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.813634 systemd[1]: Stopped target sockets.target. 
Oct 2 19:31:16.814594 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:31:16.819624 iscsid[708]: iscsid shutting down. Oct 2 19:31:16.814693 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:31:16.815898 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:31:16.815981 systemd[1]: Stopped ignition-files.service. Oct 2 19:31:16.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.817799 systemd[1]: Stopping ignition-mount.service... Oct 2 19:31:16.818547 systemd[1]: Stopping iscsid.service... Oct 2 19:31:16.820201 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:31:16.820679 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:31:16.820832 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:31:16.828052 ignition[860]: INFO : Ignition 2.14.0 Oct 2 19:31:16.828052 ignition[860]: INFO : Stage: umount Oct 2 19:31:16.828052 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:31:16.828052 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:31:16.822264 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:31:16.831410 ignition[860]: INFO : umount: umount passed Oct 2 19:31:16.831410 ignition[860]: INFO : Ignition finished successfully Oct 2 19:31:16.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.822373 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:31:16.829881 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:31:16.831976 systemd[1]: Stopped iscsid.service. Oct 2 19:31:16.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.833081 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:31:16.833159 systemd[1]: Stopped ignition-mount.service. Oct 2 19:31:16.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.834937 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:31:16.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.835439 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:31:16.835528 systemd[1]: Closed iscsid.socket. Oct 2 19:31:16.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:16.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.836114 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:31:16.836161 systemd[1]: Stopped ignition-disks.service. Oct 2 19:31:16.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.837228 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:31:16.837258 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:31:16.838191 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:31:16.838220 systemd[1]: Stopped ignition-setup.service. Oct 2 19:31:16.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.838824 systemd[1]: Stopping iscsiuio.service... Oct 2 19:31:16.839881 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:31:16.839948 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:31:16.840945 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:31:16.841009 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:31:16.841773 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:31:16.841846 systemd[1]: Stopped iscsiuio.service. Oct 2 19:31:16.843512 systemd[1]: Stopped target network.target. Oct 2 19:31:16.844423 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:31:16.844452 systemd[1]: Closed iscsiuio.socket. Oct 2 19:31:16.845344 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:31:16.845382 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:31:16.846630 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:31:16.847620 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:31:16.854568 systemd-networkd[702]: eth0: DHCPv6 lease lost Oct 2 19:31:16.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.855394 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:31:16.855474 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:31:16.857321 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:31:16.857348 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:31:16.860000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:31:16.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.859091 systemd[1]: Stopping network-cleanup.service... Oct 2 19:31:16.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.860089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Oct 2 19:31:16.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.860128 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:31:16.861165 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:31:16.861198 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:31:16.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.862270 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:31:16.862301 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:31:16.863572 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:31:16.866142 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:31:16.866536 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:31:16.866613 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:31:16.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.871097 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:31:16.872000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:31:16.871213 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:31:16.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.872936 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:31:16.873006 systemd[1]: Stopped network-cleanup.service. Oct 2 19:31:16.874097 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:31:16.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.874130 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:31:16.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.875013 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:31:16.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.875040 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:31:16.876161 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:31:16.876194 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:31:16.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:16.877194 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:31:16.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.877227 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:31:16.878350 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:31:16.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:16.878394 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:31:16.880097 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:31:16.881158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:31:16.881207 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:31:16.882499 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:31:16.882550 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:31:16.883520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:31:16.883566 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:31:16.884985 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:31:16.885343 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:31:16.885409 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:31:16.886674 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:31:16.888260 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:31:16.903544 systemd[1]: Switching root. Oct 2 19:31:16.922519 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Oct 2 19:31:16.922571 systemd-journald[198]: Journal stopped Oct 2 19:31:20.535757 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:31:20.535812 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:31:20.535823 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:31:20.535833 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:31:20.535843 kernel: SELinux: policy capability open_perms=1 Oct 2 19:31:20.535852 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:31:20.535862 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:31:20.535872 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:31:20.535883 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:31:20.535897 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:31:20.535907 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:31:20.535918 systemd[1]: Successfully loaded SELinux policy in 45.674ms. Oct 2 19:31:20.535941 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.842ms. 
Oct 2 19:31:20.535953 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:31:20.535963 systemd[1]: Detected virtualization kvm. Oct 2 19:31:20.535976 systemd[1]: Detected architecture x86-64. Oct 2 19:31:20.535986 systemd[1]: Detected first boot. Oct 2 19:31:20.535997 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:31:20.536007 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:31:20.536018 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:31:20.536029 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:31:20.536040 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:31:20.536051 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:31:20.536064 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:31:20.536074 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:31:20.536084 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:31:20.536094 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:31:20.536104 systemd[1]: Created slice system-getty.slice. Oct 2 19:31:20.536114 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:31:20.536125 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:31:20.536135 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:31:20.536146 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:31:20.536161 systemd[1]: Created slice user.slice. Oct 2 19:31:20.536172 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:31:20.536182 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:31:20.536192 systemd[1]: Set up automount boot.automount. Oct 2 19:31:20.536203 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:31:20.536213 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:31:20.536227 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:31:20.536240 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:31:20.536251 systemd[1]: Reached target integritysetup.target. Oct 2 19:31:20.536263 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:31:20.536274 systemd[1]: Reached target remote-fs.target. Oct 2 19:31:20.536284 systemd[1]: Reached target slices.target. Oct 2 19:31:20.536294 systemd[1]: Reached target swap.target. Oct 2 19:31:20.536305 systemd[1]: Reached target torcx.target. Oct 2 19:31:20.536316 systemd[1]: Reached target veritysetup.target. Oct 2 19:31:20.536328 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:31:20.536341 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:31:20.536353 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:31:20.536363 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:31:20.536373 systemd[1]: Listening on systemd-udevd-kernel.socket. 
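The two locksmithd.service warnings above name cgroup-v1-era directives that systemd 252 still accepts but plans to remove, together with their replacements. A drop-in of the kind those messages suggest would look like this (the file name and resource values are placeholders, not taken from the shipped unit):

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    # Replaces the deprecated CPUShares= / MemoryLimit= settings
    CPUWeight=100
    MemoryMax=128M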
Oct 2 19:31:20.536384 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:31:20.536394 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:31:20.536404 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:31:20.536414 systemd[1]: Mounting media.mount... Oct 2 19:31:20.536425 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:31:20.536435 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:31:20.536453 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:31:20.536465 systemd[1]: Mounting tmp.mount... Oct 2 19:31:20.536477 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:31:20.536491 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:31:20.536534 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:31:20.536549 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:31:20.536563 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:31:20.536577 systemd[1]: Starting modprobe@drm.service... Oct 2 19:31:20.536591 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:31:20.536607 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:31:20.536620 systemd[1]: Starting modprobe@loop.service... Oct 2 19:31:20.536630 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:31:20.536641 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:31:20.536651 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:31:20.536661 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:31:20.536672 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:31:20.536682 systemd[1]: Stopped systemd-journald.service. Oct 2 19:31:20.536692 systemd[1]: Starting systemd-journald.service... Oct 2 19:31:20.536702 kernel: fuse: init (API version 7.34) Oct 2 19:31:20.536713 kernel: loop: module loaded Oct 2 19:31:20.536723 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:31:20.536733 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:31:20.536743 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:31:20.536753 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:31:20.536763 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:31:20.536773 systemd[1]: Stopped verity-setup.service. Oct 2 19:31:20.536784 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:31:20.536794 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:31:20.536806 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:31:20.536816 systemd[1]: Mounted media.mount. Oct 2 19:31:20.536826 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:31:20.536839 systemd-journald[971]: Journal started Oct 2 19:31:20.536878 systemd-journald[971]: Runtime Journal (/run/log/journal/c029a50dbc0345f8820b62803c31d964) is 6.0M, max 48.4M, 42.4M free. 
Oct 2 19:31:17.009000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:31:17.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:31:17.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:31:17.392000 audit: BPF prog-id=10 op=LOAD Oct 2 19:31:17.392000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:31:17.392000 audit: BPF prog-id=11 op=LOAD Oct 2 19:31:17.392000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:31:20.401000 audit: BPF prog-id=12 op=LOAD Oct 2 19:31:20.401000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:31:20.401000 audit: BPF prog-id=13 op=LOAD Oct 2 19:31:20.401000 audit: BPF prog-id=14 op=LOAD Oct 2 19:31:20.401000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:31:20.401000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:31:20.402000 audit: BPF prog-id=15 op=LOAD Oct 2 19:31:20.402000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:31:20.402000 audit: BPF prog-id=16 op=LOAD Oct 2 19:31:20.403000 audit: BPF prog-id=17 op=LOAD Oct 2 19:31:20.403000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:31:20.403000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:31:20.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.412000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:31:20.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.512000 audit: BPF prog-id=18 op=LOAD Oct 2 19:31:20.512000 audit: BPF prog-id=19 op=LOAD Oct 2 19:31:20.512000 audit: BPF prog-id=20 op=LOAD Oct 2 19:31:20.512000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:31:20.512000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:31:20.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:31:20.534000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:31:20.534000 audit[971]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffeacb1fc80 a2=4000 a3=7ffeacb1fd1c items=0 ppid=1 pid=971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:20.534000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:31:17.460577 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:31:20.400710 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:31:17.460902 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:31:20.400720 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:31:17.460927 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:31:20.404142 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:31:17.460966 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:31:17.460980 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:31:17.461017 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:31:17.461033 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:31:17.461277 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:31:17.461321 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:31:17.461337 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:31:17.461731 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:31:17.461774 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 
19:31:17.461796 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:31:17.461814 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:31:17.461834 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:31:17.461851 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:31:20.093582 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:31:20.093893 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:31:20.094005 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:31:20.094192 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:31:20.094237 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:20Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:31:20.094300 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2023-10-02T19:31:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:31:20.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.540680 systemd[1]: Started systemd-journald.service. Oct 2 19:31:20.539985 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:31:20.540826 systemd[1]: Mounted tmp.mount. Oct 2 19:31:20.541736 systemd[1]: Finished kmod-static-nodes.service. 
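The torcx generator lines above choose which add-on images to unpack by reading profile manifests such as /usr/share/torcx/profiles/vendor.json, then propagate the docker binaries and units from the docker:com.coreos.cl archive and seal the result under /run/metadata/torcx. A vendor-style profile selecting that image has roughly this shape (a sketch of the torcx profile-manifest format, not a copy of the file shipped on this image):

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }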
Oct 2 19:31:20.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.542579 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:31:20.542714 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:31:20.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.543498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:31:20.543659 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:31:20.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.544628 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:31:20.544755 systemd[1]: Finished modprobe@drm.service. Oct 2 19:31:20.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.545744 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:31:20.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.546521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:31:20.546644 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:31:20.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.547412 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:31:20.547634 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:31:20.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:20.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.548404 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:31:20.548559 systemd[1]: Finished modprobe@loop.service. Oct 2 19:31:20.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.549326 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:31:20.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.550151 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:31:20.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.551026 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:31:20.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.552112 systemd[1]: Reached target network-pre.target. Oct 2 19:31:20.553731 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:31:20.555804 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:31:20.556712 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:31:20.558733 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:31:20.561290 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:31:20.562265 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:31:20.563704 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:31:20.564627 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:31:20.566079 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:31:20.569372 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:31:20.570761 systemd-journald[971]: Time spent on flushing to /var/log/journal/c029a50dbc0345f8820b62803c31d964 is 21.745ms for 1155 entries. Oct 2 19:31:20.570761 systemd-journald[971]: System Journal (/var/log/journal/c029a50dbc0345f8820b62803c31d964) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:31:20.598078 systemd-journald[971]: Received client request to flush runtime journal. Oct 2 19:31:20.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:20.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.572830 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:31:20.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:20.573938 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:31:20.575018 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:31:20.576058 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:31:20.586939 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:31:20.588035 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:31:20.590407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:31:20.593867 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:31:20.596148 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:31:20.599217 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:31:20.604632 udevadm[1001]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:31:20.611872 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:31:20.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.304561 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:31:21.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.305000 audit: BPF prog-id=21 op=LOAD Oct 2 19:31:21.305000 audit: BPF prog-id=22 op=LOAD Oct 2 19:31:21.305000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:31:21.305000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:31:21.306997 systemd[1]: Starting systemd-udevd.service... Oct 2 19:31:21.323332 systemd-udevd[1003]: Using default interface naming scheme 'v252'. Oct 2 19:31:21.345746 systemd[1]: Started systemd-udevd.service. Oct 2 19:31:21.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.347000 audit: BPF prog-id=23 op=LOAD Oct 2 19:31:21.349039 systemd[1]: Starting systemd-networkd.service... Oct 2 19:31:21.353000 audit: BPF prog-id=24 op=LOAD Oct 2 19:31:21.354000 audit: BPF prog-id=25 op=LOAD Oct 2 19:31:21.354000 audit: BPF prog-id=26 op=LOAD Oct 2 19:31:21.355264 systemd[1]: Starting systemd-userdbd.service... 
Oct 2 19:31:21.366489 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:31:21.389586 systemd[1]: Started systemd-userdbd.service. Oct 2 19:31:21.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.401812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:31:21.432539 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:31:21.431000 audit[1016]: AVC avc: denied { confidentiality } for pid=1016 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:31:21.431000 audit[1016]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d895b00250 a1=32194 a2=7ff73ce6fbc5 a3=5 items=106 ppid=1003 pid=1016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:21.431000 audit: CWD cwd="/" Oct 2 19:31:21.431000 audit: PATH item=0 name=(null) inode=10130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=1 name=(null) inode=10131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=2 name=(null) inode=10130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=3 name=(null) inode=10132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=4 name=(null) inode=10130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=5 name=(null) inode=10133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=6 name=(null) inode=10133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=7 name=(null) inode=10134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=8 name=(null) inode=10133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=9 name=(null) inode=10135 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=10 name=(null) inode=10133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=11 name=(null) inode=10136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=12 name=(null) inode=10133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=13 name=(null) inode=10137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=14 name=(null) inode=10133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=15 name=(null) inode=10138 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=16 name=(null) inode=10130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=17 name=(null) inode=10139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=18 name=(null) inode=10139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=19 name=(null) inode=10140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=20 name=(null) inode=10139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=21 name=(null) inode=10141 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=22 name=(null) inode=10139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=23 name=(null) inode=10142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=24 name=(null) inode=10139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=25 name=(null) inode=10143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=26 name=(null) inode=10139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=27 name=(null) 
inode=10144 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=28 name=(null) inode=10130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=29 name=(null) inode=10145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=30 name=(null) inode=10145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=31 name=(null) inode=10146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=32 name=(null) inode=10145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=33 name=(null) inode=10147 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=34 name=(null) inode=10145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=35 name=(null) inode=10148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=36 name=(null) inode=10145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=37 name=(null) inode=10149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=38 name=(null) inode=10145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=39 name=(null) inode=10150 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=40 name=(null) inode=10130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=41 name=(null) inode=10151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=42 name=(null) inode=10151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=43 name=(null) inode=10152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=44 name=(null) inode=10151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=45 name=(null) inode=10153 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=46 name=(null) inode=10151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=47 name=(null) inode=10154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=48 name=(null) inode=10151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=49 name=(null) inode=10155 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=50 name=(null) inode=10151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=51 name=(null) inode=10156 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=52 name=(null) inode=1040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=53 name=(null) inode=10157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=54 name=(null) inode=10157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=55 name=(null) inode=10158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=56 name=(null) inode=10157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=57 name=(null) inode=10159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=58 name=(null) inode=10157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=59 name=(null) inode=10160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=60 name=(null) inode=10160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=61 name=(null) inode=10161 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.440965 systemd-networkd[1014]: lo: Link UP Oct 2 19:31:21.440969 systemd-networkd[1014]: lo: Gained carrier Oct 2 19:31:21.431000 audit: PATH item=62 name=(null) inode=10160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=63 name=(null) inode=10162 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.441495 systemd-networkd[1014]: Enumeration completed Oct 2 19:31:21.431000 audit: PATH item=64 name=(null) inode=10160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.441613 systemd[1]: Started systemd-networkd.service. Oct 2 19:31:21.441630 systemd-networkd[1014]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:31:21.442721 systemd-networkd[1014]: eth0: Link UP Oct 2 19:31:21.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.442725 systemd-networkd[1014]: eth0: Gained carrier Oct 2 19:31:21.431000 audit: PATH item=65 name=(null) inode=10163 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=66 name=(null) inode=10160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=67 name=(null) inode=10164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=68 name=(null) inode=10160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=69 name=(null) inode=10165 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=70 name=(null) inode=10157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=71 name=(null) inode=10166 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=72 name=(null) inode=10166 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=73 name=(null) inode=10167 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=74 name=(null) inode=10166 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=75 name=(null) inode=10168 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=76 name=(null) inode=10166 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=77 name=(null) inode=10169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=78 name=(null) inode=10166 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=79 name=(null) inode=10170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=80 name=(null) inode=10166 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=81 name=(null) inode=10171 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=82 name=(null) inode=10157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=83 name=(null) inode=10172 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=84 name=(null) inode=10172 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=85 name=(null) inode=10173 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=86 name=(null) inode=10172 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=87 name=(null) inode=10174 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=88 name=(null) inode=10172 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=89 name=(null) inode=10175 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=90 name=(null) inode=10172 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=91 name=(null) inode=10176 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=92 name=(null) inode=10172 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=93 name=(null) inode=10177 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=94 name=(null) inode=10157 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=95 name=(null) inode=10178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=96 name=(null) inode=10178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=97 name=(null) inode=10179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=98 name=(null) inode=10178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=99 name=(null) inode=10180 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=100 name=(null) inode=10178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=101 name=(null) inode=10181 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=102 name=(null) inode=10178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=103 name=(null) inode=10182 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=104 name=(null) inode=10178 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PATH item=105 name=(null) inode=10183 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:31:21.431000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:31:21.452543 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:31:21.467530 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:31:21.467746 systemd-networkd[1014]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:31:21.481569 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Oct 2 19:31:21.499538 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:31:21.539560 kernel: kvm: Nested Virtualization enabled Oct 2 19:31:21.539702 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:31:21.556542 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:31:21.576013 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:31:21.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.578248 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:31:21.593663 lvm[1039]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:31:21.624857 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:31:21.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.625652 systemd[1]: Reached target cryptsetup.target. Oct 2 19:31:21.627362 systemd[1]: Starting lvm2-activation.service... Oct 2 19:31:21.631434 lvm[1040]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:31:21.662969 systemd[1]: Finished lvm2-activation.service. Oct 2 19:31:21.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.663883 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:31:21.664673 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:31:21.664719 systemd[1]: Reached target local-fs.target. Oct 2 19:31:21.665374 systemd[1]: Reached target machines.target. Oct 2 19:31:21.667371 systemd[1]: Starting ldconfig.service... Oct 2 19:31:21.668366 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:31:21.668437 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:31:21.669555 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:31:21.671428 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:31:21.673199 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:31:21.674177 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:31:21.674233 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:31:21.675189 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Oct 2 19:31:21.676254 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1042 (bootctl) Oct 2 19:31:21.677602 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:31:21.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.687653 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:31:21.694447 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:31:21.695197 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:31:21.696684 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:31:21.762004 systemd-fsck[1050]: fsck.fat 4.2 (2021-01-31) Oct 2 19:31:21.762004 systemd-fsck[1050]: /dev/vda1: 790 files, 115092/258078 clusters Oct 2 19:31:21.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.763055 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:31:21.765193 kernel: kauditd_printk_skb: 218 callbacks suppressed Oct 2 19:31:21.765259 kernel: audit: type=1130 audit(1696275081.763:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.765770 systemd[1]: Mounting boot.mount... Oct 2 19:31:21.804322 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:31:21.804863 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:31:21.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.805651 systemd[1]: Mounted boot.mount. Oct 2 19:31:21.808517 kernel: audit: type=1130 audit(1696275081.805:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.818859 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:31:21.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.822522 kernel: audit: type=1130 audit(1696275081.819:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.871953 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:31:21.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:21.874022 systemd[1]: Starting audit-rules.service... Oct 2 19:31:21.875599 kernel: audit: type=1130 audit(1696275081.872:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.876850 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:31:21.879000 audit: BPF prog-id=27 op=LOAD Oct 2 19:31:21.882061 kernel: audit: type=1334 audit(1696275081.879:156): prog-id=27 op=LOAD Oct 2 19:31:21.878772 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:31:21.881142 systemd[1]: Starting systemd-resolved.service... Oct 2 19:31:21.881000 audit: BPF prog-id=28 op=LOAD Oct 2 19:31:21.885491 kernel: audit: type=1334 audit(1696275081.881:157): prog-id=28 op=LOAD Oct 2 19:31:21.883252 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:31:21.884872 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:31:21.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.888000 audit[1061]: SYSTEM_BOOT pid=1061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.886699 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:31:21.887684 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:31:21.891055 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:31:21.892728 kernel: audit: type=1130 audit(1696275081.886:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.892762 kernel: audit: type=1127 audit(1696275081.888:159): pid=1061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.896438 kernel: audit: type=1130 audit(1696275081.892:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.903882 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:31:21.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:21.907523 kernel: audit: type=1130 audit(1696275081.904:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:21.932000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:31:21.932000 audit[1074]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9db07770 a2=420 a3=0 items=0 ppid=1053 pid=1074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:21.932000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:31:21.933049 augenrules[1074]: No rules Oct 2 19:31:21.933755 systemd[1]: Finished audit-rules.service. Oct 2 19:31:21.936813 systemd-resolved[1059]: Positive Trust Anchors: Oct 2 19:31:21.936835 systemd-resolved[1059]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:31:21.936870 systemd-resolved[1059]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:31:21.940144 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:31:21.940895 systemd[1]: Reached target time-set.target. Oct 2 19:31:21.941256 systemd-timesyncd[1060]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:31:21.941529 systemd-timesyncd[1060]: Initial clock synchronization to Mon 2023-10-02 19:31:21.811568 UTC. Oct 2 19:31:21.954638 systemd-resolved[1059]: Defaulting to hostname 'linux'. Oct 2 19:31:21.956186 systemd[1]: Started systemd-resolved.service. Oct 2 19:31:21.956821 systemd[1]: Reached target network.target. Oct 2 19:31:21.957340 systemd[1]: Reached target nss-lookup.target. Oct 2 19:31:21.989660 ldconfig[1041]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:31:21.995757 systemd[1]: Finished ldconfig.service. Oct 2 19:31:21.997565 systemd[1]: Starting systemd-update-done.service... Oct 2 19:31:22.004189 systemd[1]: Finished systemd-update-done.service. Oct 2 19:31:22.004824 systemd[1]: Reached target sysinit.target. Oct 2 19:31:22.005434 systemd[1]: Started motdgen.path. Oct 2 19:31:22.005903 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:31:22.006716 systemd[1]: Started logrotate.timer. Oct 2 19:31:22.007251 systemd[1]: Started mdadm.timer. Oct 2 19:31:22.007679 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:31:22.008197 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:31:22.008216 systemd[1]: Reached target paths.target. Oct 2 19:31:22.008683 systemd[1]: Reached target timers.target. Oct 2 19:31:22.009374 systemd[1]: Listening on dbus.socket. Oct 2 19:31:22.010720 systemd[1]: Starting docker.socket... Oct 2 19:31:22.013903 systemd[1]: Listening on sshd.socket. Oct 2 19:31:22.014466 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:31:22.014829 systemd[1]: Listening on docker.socket. 
Oct 2 19:31:22.015366 systemd[1]: Reached target sockets.target. Oct 2 19:31:22.015865 systemd[1]: Reached target basic.target. Oct 2 19:31:22.016380 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:31:22.016399 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:31:22.017311 systemd[1]: Starting containerd.service... Oct 2 19:31:22.018858 systemd[1]: Starting dbus.service... Oct 2 19:31:22.020141 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:31:22.021669 systemd[1]: Starting extend-filesystems.service... Oct 2 19:31:22.022260 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:31:22.023171 systemd[1]: Starting motdgen.service... Oct 2 19:31:22.024760 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:31:22.026225 systemd[1]: Starting prepare-critools.service... Oct 2 19:31:22.026931 jq[1085]: false Oct 2 19:31:22.028451 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:31:22.030235 systemd[1]: Starting sshd-keygen.service... Oct 2 19:31:22.033269 systemd[1]: Starting systemd-logind.service... Oct 2 19:31:22.043788 dbus-daemon[1084]: [system] SELinux support is enabled Oct 2 19:31:22.033860 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:31:22.033918 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:31:22.034295 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:31:22.047149 jq[1103]: true Oct 2 19:31:22.034909 systemd[1]: Starting update-engine.service... Oct 2 19:31:22.036441 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:31:22.047462 tar[1106]: ./ Oct 2 19:31:22.047462 tar[1106]: ./loopback Oct 2 19:31:22.038833 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:31:22.038994 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:31:22.039932 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:31:22.040098 systemd[1]: Finished motdgen.service. Oct 2 19:31:22.044101 systemd[1]: Started dbus.service. Oct 2 19:31:22.047261 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:31:22.047423 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:31:22.055550 tar[1108]: crictl Oct 2 19:31:22.052593 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:31:22.052623 systemd[1]: Reached target system-config.target. Oct 2 19:31:22.053315 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:31:22.053327 systemd[1]: Reached target user-config.target. 
Oct 2 19:31:22.058278 jq[1110]: true Oct 2 19:31:22.063119 extend-filesystems[1086]: Found sr0 Oct 2 19:31:22.064189 extend-filesystems[1086]: Found vda Oct 2 19:31:22.064189 extend-filesystems[1086]: Found vda1 Oct 2 19:31:22.064189 extend-filesystems[1086]: Found vda2 Oct 2 19:31:22.064189 extend-filesystems[1086]: Found vda3 Oct 2 19:31:22.064189 extend-filesystems[1086]: Found usr Oct 2 19:31:22.064189 extend-filesystems[1086]: Found vda4 Oct 2 19:31:22.064189 extend-filesystems[1086]: Found vda6 Oct 2 19:31:22.064189 extend-filesystems[1086]: Found vda7 Oct 2 19:31:22.064189 extend-filesystems[1086]: Found vda9 Oct 2 19:31:22.064189 extend-filesystems[1086]: Checking size of /dev/vda9 Oct 2 19:31:22.116359 tar[1106]: ./bandwidth Oct 2 19:31:22.119852 extend-filesystems[1086]: Old size kept for /dev/vda9 Oct 2 19:31:22.120467 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:31:22.128152 env[1112]: time="2023-10-02T19:31:22.127451778Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:31:22.120638 systemd[1]: Finished extend-filesystems.service. Oct 2 19:31:22.143883 update_engine[1101]: I1002 19:31:22.143493 1101 main.cc:92] Flatcar Update Engine starting Oct 2 19:31:22.147849 systemd[1]: Started update-engine.service. Oct 2 19:31:22.148410 update_engine[1101]: I1002 19:31:22.148264 1101 update_check_scheduler.cc:74] Next update check in 10m59s Oct 2 19:31:22.157109 tar[1106]: ./ptp Oct 2 19:31:22.167974 env[1112]: time="2023-10-02T19:31:22.167934651Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:31:22.168216 env[1112]: time="2023-10-02T19:31:22.168198331Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:31:22.169315 env[1112]: time="2023-10-02T19:31:22.169291717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:31:22.169396 env[1112]: time="2023-10-02T19:31:22.169377665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:31:22.169674 env[1112]: time="2023-10-02T19:31:22.169655498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:31:22.169765 env[1112]: time="2023-10-02T19:31:22.169747223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:31:22.169844 env[1112]: time="2023-10-02T19:31:22.169823689Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:31:22.169917 env[1112]: time="2023-10-02T19:31:22.169897958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:31:22.170053 env[1112]: time="2023-10-02T19:31:22.170035426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1
Oct 2 19:31:22.170350 env[1112]: time="2023-10-02T19:31:22.170330981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:31:22.170539 env[1112]: time="2023-10-02T19:31:22.170519940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:31:22.170631 env[1112]: time="2023-10-02T19:31:22.170610618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 2 19:31:22.170745 env[1112]: time="2023-10-02T19:31:22.170726767Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 2 19:31:22.170848 env[1112]: time="2023-10-02T19:31:22.170809147Z" level=info msg="metadata content store policy set" policy=shared
Oct 2 19:31:22.176575 systemd[1]: Started locksmithd.service.
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181790003Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181826610Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181839601Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181876533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181890302Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181901480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181912243Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181924150Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181935081Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181948367Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181959219Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.181969716Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.182066496Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 2 19:31:22.183384 env[1112]: time="2023-10-02T19:31:22.182142627Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182368636Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182394539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182415769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182462055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182473184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182483355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182500575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182524575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182535801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182545688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182555288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182566524Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182677745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182691307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.183760 env[1112]: time="2023-10-02T19:31:22.182702168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.184120 env[1112]: time="2023-10-02T19:31:22.182711670Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 2 19:31:22.184120 env[1112]: time="2023-10-02T19:31:22.182724592Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 2 19:31:22.184120 env[1112]: time="2023-10-02T19:31:22.182733748Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 2 19:31:22.184120 env[1112]: time="2023-10-02T19:31:22.182758399Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 2 19:31:22.184120 env[1112]: time="2023-10-02T19:31:22.182791626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 2 19:31:22.184216 env[1112]: time="2023-10-02T19:31:22.182960477Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 2 19:31:22.184216 env[1112]: time="2023-10-02T19:31:22.183011425Z" level=info msg="Connect containerd service"
Oct 2 19:31:22.184216 env[1112]: time="2023-10-02T19:31:22.183045124Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.185272613Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.185408504Z" level=info msg="Start subscribing containerd event"
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.185473655Z" level=info msg="Start recovering state"
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.185636503Z" level=info msg="Start event monitor"
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.185665678Z" level=info msg="Start snapshots syncer"
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.185681035Z" level=info msg="Start cni network conf syncer for default"
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.185690615Z" level=info msg="Start streaming server"
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.186188682Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 2 19:31:22.186586 env[1112]: time="2023-10-02T19:31:22.186228807Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 2 19:31:22.186365 systemd[1]: Started containerd.service.
Oct 2 19:31:22.186848 env[1112]: time="2023-10-02T19:31:22.186829627Z" level=info msg="containerd successfully booted in 0.063807s"
Oct 2 19:31:22.197129 bash[1141]: Updated "/home/core/.ssh/authorized_keys"
Oct 2 19:31:22.188690 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 2 19:31:22.210195 systemd-logind[1100]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 2 19:31:22.210463 systemd-logind[1100]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 2 19:31:22.212116 systemd-logind[1100]: New seat seat0.
Oct 2 19:31:22.218567 systemd[1]: Started systemd-logind.service.
Oct 2 19:31:22.221185 tar[1106]: ./vlan
Oct 2 19:31:22.266729 tar[1106]: ./host-device
Oct 2 19:31:22.332776 tar[1106]: ./tuning
Oct 2 19:31:22.357414 locksmithd[1142]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 2 19:31:22.367372 tar[1106]: ./vrf
Oct 2 19:31:22.435609 tar[1106]: ./sbr
Oct 2 19:31:22.461672 tar[1106]: ./tap
Oct 2 19:31:22.492706 tar[1106]: ./dhcp
Oct 2 19:31:22.567433 tar[1106]: ./static
Oct 2 19:31:22.588399 tar[1106]: ./firewall
Oct 2 19:31:22.597276 systemd[1]: Finished prepare-critools.service.
Oct 2 19:31:22.621194 tar[1106]: ./macvlan
Oct 2 19:31:22.650393 tar[1106]: ./dummy
Oct 2 19:31:22.679065 tar[1106]: ./bridge
Oct 2 19:31:22.710355 tar[1106]: ./ipvlan
Oct 2 19:31:22.739166 tar[1106]: ./portmap
Oct 2 19:31:22.766706 tar[1106]: ./host-local
Oct 2 19:31:22.800395 systemd[1]: Finished prepare-cni-plugins.service.
Oct 2 19:31:23.162683 systemd-networkd[1014]: eth0: Gained IPv6LL
Oct 2 19:31:23.710131 sshd_keygen[1109]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 2 19:31:23.729876 systemd[1]: Finished sshd-keygen.service.
Oct 2 19:31:23.732290 systemd[1]: Starting issuegen.service...
Oct 2 19:31:23.737168 systemd[1]: issuegen.service: Deactivated successfully.
Oct 2 19:31:23.737303 systemd[1]: Finished issuegen.service.
Oct 2 19:31:23.739113 systemd[1]: Starting systemd-user-sessions.service...
Oct 2 19:31:23.744461 systemd[1]: Finished systemd-user-sessions.service.
Oct 2 19:31:23.746232 systemd[1]: Started getty@tty1.service.
Oct 2 19:31:23.747930 systemd[1]: Started serial-getty@ttyS0.service.
Oct 2 19:31:23.748741 systemd[1]: Reached target getty.target.
Oct 2 19:31:23.749331 systemd[1]: Reached target multi-user.target.
Oct 2 19:31:23.750819 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 2 19:31:23.756781 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 2 19:31:23.756897 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 2 19:31:23.757635 systemd[1]: Startup finished in 663ms (kernel) + 6.215s (initrd) + 6.801s (userspace) = 13.681s.
Oct 2 19:31:25.322399 systemd[1]: Created slice system-sshd.slice.
Oct 2 19:31:25.323419 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:57994.service.
Oct 2 19:31:25.372394 sshd[1168]: Accepted publickey for core from 10.0.0.1 port 57994 ssh2: RSA SHA256:rt0kLJPhozfWQwmbrrsY5nEv7TGAszYkJtwNcrsaCus
Oct 2 19:31:25.374244 sshd[1168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:31:25.382454 systemd[1]: Created slice user-500.slice.
Oct 2 19:31:25.383599 systemd[1]: Starting user-runtime-dir@500.service...
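The CRI plugin error above ("no network config found in /etc/cni/net.d") is what containerd reports while that directory is still empty; the CNI plugin binaries (bridge, host-local, portmap, ...) have only just been unpacked for prepare-cni-plugins.service, and no network definition has been installed yet. As a minimal sketch of the kind of conflist file that lookup expects, the Python snippet below writes one; the file name 10-containerd-net.conflist, the bridge name cni0, and the 10.88.0.0/16 subnet are illustrative placeholders, not values taken from this log.

#!/usr/bin/env python3
"""Illustrative sketch only: write a minimal CNI network config of the kind the
"no network config found in /etc/cni/net.d" error above is looking for.
File name, bridge name, and subnet are placeholders, not values from this log."""
import json
import pathlib

conf_dir = pathlib.Path("/etc/cni/net.d")  # NetworkPluginConfDir from the CRI config above

conflist = {
    "cniVersion": "0.3.1",
    "name": "containerd-net",  # placeholder network name
    "plugins": [
        {   # L2 bridge with host-local IPAM; both plugin binaries appear in the tar output above
            "type": "bridge",
            "bridge": "cni0",                 # placeholder bridge device
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",     # placeholder pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

if __name__ == "__main__":
    conf_dir.mkdir(parents=True, exist_ok=True)
    (conf_dir / "10-containerd-net.conflist").write_text(json.dumps(conflist, indent=2))
    print("wrote", conf_dir / "10-containerd-net.conflist")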
Oct 2 19:31:25.385064 systemd-logind[1100]: New session 1 of user core. Oct 2 19:31:25.391224 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:31:25.392541 systemd[1]: Starting user@500.service... Oct 2 19:31:25.395182 (systemd)[1171]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:31:25.465986 systemd[1171]: Queued start job for default target default.target. Oct 2 19:31:25.466475 systemd[1171]: Reached target paths.target. Oct 2 19:31:25.466531 systemd[1171]: Reached target sockets.target. Oct 2 19:31:25.466549 systemd[1171]: Reached target timers.target. Oct 2 19:31:25.466560 systemd[1171]: Reached target basic.target. Oct 2 19:31:25.466598 systemd[1171]: Reached target default.target. Oct 2 19:31:25.466621 systemd[1171]: Startup finished in 66ms. Oct 2 19:31:25.466727 systemd[1]: Started user@500.service. Oct 2 19:31:25.467771 systemd[1]: Started session-1.scope. Oct 2 19:31:25.518543 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:58008.service. Oct 2 19:31:25.562688 sshd[1180]: Accepted publickey for core from 10.0.0.1 port 58008 ssh2: RSA SHA256:rt0kLJPhozfWQwmbrrsY5nEv7TGAszYkJtwNcrsaCus Oct 2 19:31:25.564099 sshd[1180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:31:25.567783 systemd-logind[1100]: New session 2 of user core. Oct 2 19:31:25.568564 systemd[1]: Started session-2.scope. Oct 2 19:31:25.621382 sshd[1180]: pam_unix(sshd:session): session closed for user core Oct 2 19:31:25.624448 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:58012.service. Oct 2 19:31:25.624890 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:58008.service: Deactivated successfully. Oct 2 19:31:25.625376 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:31:25.625860 systemd-logind[1100]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:31:25.626686 systemd-logind[1100]: Removed session 2. Oct 2 19:31:25.664876 sshd[1185]: Accepted publickey for core from 10.0.0.1 port 58012 ssh2: RSA SHA256:rt0kLJPhozfWQwmbrrsY5nEv7TGAszYkJtwNcrsaCus Oct 2 19:31:25.665849 sshd[1185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:31:25.669070 systemd-logind[1100]: New session 3 of user core. Oct 2 19:31:25.669826 systemd[1]: Started session-3.scope. Oct 2 19:31:25.718304 sshd[1185]: pam_unix(sshd:session): session closed for user core Oct 2 19:31:25.721227 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:58012.service: Deactivated successfully. Oct 2 19:31:25.721811 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:31:25.722290 systemd-logind[1100]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:31:25.723354 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:58022.service. Oct 2 19:31:25.723982 systemd-logind[1100]: Removed session 3. Oct 2 19:31:25.764158 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 58022 ssh2: RSA SHA256:rt0kLJPhozfWQwmbrrsY5nEv7TGAszYkJtwNcrsaCus Oct 2 19:31:25.765777 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:31:25.769058 systemd-logind[1100]: New session 4 of user core. Oct 2 19:31:25.769812 systemd[1]: Started session-4.scope. Oct 2 19:31:25.822913 sshd[1192]: pam_unix(sshd:session): session closed for user core Oct 2 19:31:25.826041 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:58022.service: Deactivated successfully. Oct 2 19:31:25.826541 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:31:25.827018 systemd-logind[1100]: Session 4 logged out. 
Waiting for processes to exit. Oct 2 19:31:25.827831 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:58030.service. Oct 2 19:31:25.828687 systemd-logind[1100]: Removed session 4. Oct 2 19:31:25.867394 sshd[1198]: Accepted publickey for core from 10.0.0.1 port 58030 ssh2: RSA SHA256:rt0kLJPhozfWQwmbrrsY5nEv7TGAszYkJtwNcrsaCus Oct 2 19:31:25.868246 sshd[1198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:31:25.871487 systemd-logind[1100]: New session 5 of user core. Oct 2 19:31:25.872326 systemd[1]: Started session-5.scope. Oct 2 19:31:25.974783 sudo[1201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:31:25.974956 sudo[1201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:31:25.985686 dbus-daemon[1084]: \xd0\u000dp\u0013\xe5U: received setenforce notice (enforcing=1637465552) Oct 2 19:31:25.987580 sudo[1201]: pam_unix(sudo:session): session closed for user root Oct 2 19:31:25.989405 sshd[1198]: pam_unix(sshd:session): session closed for user core Oct 2 19:31:25.992044 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:58030.service: Deactivated successfully. Oct 2 19:31:25.992632 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:31:25.993111 systemd-logind[1100]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:31:25.994365 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:58038.service. Oct 2 19:31:25.994972 systemd-logind[1100]: Removed session 5. Oct 2 19:31:26.034251 sshd[1205]: Accepted publickey for core from 10.0.0.1 port 58038 ssh2: RSA SHA256:rt0kLJPhozfWQwmbrrsY5nEv7TGAszYkJtwNcrsaCus Oct 2 19:31:26.035130 sshd[1205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:31:26.037969 systemd-logind[1100]: New session 6 of user core. Oct 2 19:31:26.038729 systemd[1]: Started session-6.scope. Oct 2 19:31:26.089479 sudo[1209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:31:26.089661 sudo[1209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:31:26.091682 sudo[1209]: pam_unix(sudo:session): session closed for user root Oct 2 19:31:26.095210 sudo[1208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:31:26.095379 sudo[1208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:31:26.102625 systemd[1]: Stopping audit-rules.service... Oct 2 19:31:26.102000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:31:26.102000 audit[1212]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2b19a1a0 a2=420 a3=0 items=0 ppid=1 pid=1212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:26.102000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:31:26.104139 auditctl[1212]: No rules Oct 2 19:31:26.104302 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:31:26.104430 systemd[1]: Stopped audit-rules.service. Oct 2 19:31:26.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:26.105700 systemd[1]: Starting audit-rules.service... Oct 2 19:31:26.120077 augenrules[1229]: No rules Oct 2 19:31:26.120681 systemd[1]: Finished audit-rules.service. Oct 2 19:31:26.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:26.121396 sudo[1208]: pam_unix(sudo:session): session closed for user root Oct 2 19:31:26.120000 audit[1208]: USER_END pid=1208 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:31:26.121000 audit[1208]: CRED_DISP pid=1208 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:31:26.122731 sshd[1205]: pam_unix(sshd:session): session closed for user core Oct 2 19:31:26.123000 audit[1205]: USER_END pid=1205 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:31:26.123000 audit[1205]: CRED_DISP pid=1205 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:31:26.125713 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:58038.service: Deactivated successfully. Oct 2 19:31:26.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.14:22-10.0.0.1:58038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:26.126274 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:31:26.126756 systemd-logind[1100]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:31:26.127593 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:58050.service. Oct 2 19:31:26.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.14:22-10.0.0.1:58050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:26.128203 systemd-logind[1100]: Removed session 6. 
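The burst of short-lived SSH sessions above (sessions 2 through 6 are opened and torn down within a fraction of a second each, with a couple of sudo invocations in between) is easier to follow when the open/close records are paired up. A small sketch, assuming this console output has been saved to a file such as boot.log (a hypothetical path), that matches each "New session N" record to its "Removed session N" record and prints the duration:

#!/usr/bin/env python3
"""Sketch: pair systemd-logind "New session N" / "Removed session N" records
from a saved copy of this console log. The boot.log path is hypothetical."""
import re
from datetime import datetime

NEW = re.compile(r"Oct\s+2 (\d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
GONE = re.compile(r"Oct\s+2 (\d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def ts(stamp: str) -> datetime:
    # Timestamps in this log carry no date beyond "Oct 2", so only time-of-day is parsed.
    return datetime.strptime(stamp, "%H:%M:%S.%f")

with open("boot.log") as fh:
    text = fh.read()

opened = {}  # session id -> (start time, user)
for m in NEW.finditer(text):
    opened[m.group(2)] = (ts(m.group(1)), m.group(3))

for m in GONE.finditer(text):
    sid = m.group(2)
    if sid in opened:
        start, user = opened[sid]
        duration = (ts(m.group(1)) - start).total_seconds()
        print(f"session {sid} ({user}): {duration:.3f}s")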
Oct 2 19:31:26.166000 audit[1235]: USER_ACCT pid=1235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:31:26.168124 sshd[1235]: Accepted publickey for core from 10.0.0.1 port 58050 ssh2: RSA SHA256:rt0kLJPhozfWQwmbrrsY5nEv7TGAszYkJtwNcrsaCus
Oct 2 19:31:26.167000 audit[1235]: CRED_ACQ pid=1235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:31:26.167000 audit[1235]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc38e0ca20 a2=3 a3=0 items=0 ppid=1 pid=1235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:31:26.167000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 2 19:31:26.169205 sshd[1235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:31:26.172531 systemd-logind[1100]: New session 7 of user core.
Oct 2 19:31:26.173297 systemd[1]: Started session-7.scope.
Oct 2 19:31:26.175000 audit[1235]: USER_START pid=1235 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:31:26.176000 audit[1237]: CRED_ACQ pid=1237 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:31:26.223000 audit[1238]: USER_ACCT pid=1238 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:31:26.223000 audit[1238]: CRED_REFR pid=1238 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:31:26.223889 sudo[1238]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 2 19:31:26.224049 sudo[1238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:31:26.225000 audit[1238]: USER_START pid=1238 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:31:26.744682 systemd[1]: Reloading.
Oct 2 19:31:26.813130 /usr/lib/systemd/system-generators/torcx-generator[1268]: time="2023-10-02T19:31:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:31:26.813157 /usr/lib/systemd/system-generators/torcx-generator[1268]: time="2023-10-02T19:31:26Z" level=info msg="torcx already run" Oct 2 19:31:26.871622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:31:26.871638 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:31:26.891400 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:31:26.962953 kernel: kauditd_printk_skb: 24 callbacks suppressed Oct 2 19:31:26.963055 kernel: audit: type=1400 audit(1696275086.952:180): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.963077 kernel: audit: type=1400 audit(1696275086.952:181): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.963094 kernel: audit: type=1400 audit(1696275086.952:182): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.963107 kernel: audit: type=1400 audit(1696275086.952:183): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.963125 kernel: audit: type=1400 audit(1696275086.952:184): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.965212 kernel: audit: type=1400 audit(1696275086.952:185): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.965255 kernel: audit: type=1400 audit(1696275086.952:186): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.967126 kernel: audit: type=1400 audit(1696275086.952:187): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.968998 kernel: audit: type=1400 audit(1696275086.952:188): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.969041 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:31:26.952000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.954000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.954000 audit: BPF prog-id=34 op=LOAD Oct 2 19:31:26.954000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.956000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit: BPF prog-id=35 op=LOAD Oct 2 19:31:26.958000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit: BPF prog-id=36 op=LOAD Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.962000 audit: BPF prog-id=37 op=LOAD Oct 2 19:31:26.962000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:31:26.962000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit: BPF prog-id=38 op=LOAD Oct 2 19:31:26.964000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.964000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.968000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.968000 audit: BPF prog-id=39 op=LOAD Oct 2 19:31:26.968000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.968000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.968000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.968000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.968000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.968000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.970000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.970000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.970000 audit: BPF prog-id=40 op=LOAD Oct 2 19:31:26.970000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:31:26.970000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit: BPF prog-id=41 op=LOAD Oct 2 19:31:26.971000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.971000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit: BPF prog-id=42 op=LOAD Oct 2 19:31:26.972000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit: BPF prog-id=43 op=LOAD Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.972000 audit: BPF prog-id=44 op=LOAD Oct 2 19:31:26.972000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:31:26.972000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.973000 audit: BPF prog-id=45 op=LOAD Oct 2 19:31:26.973000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit: BPF prog-id=46 op=LOAD Oct 2 19:31:26.974000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit: BPF prog-id=47 op=LOAD Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:26.974000 audit: BPF prog-id=48 op=LOAD Oct 2 19:31:26.974000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:31:26.974000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:31:26.981486 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:31:26.987077 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:31:26.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:26.987524 systemd[1]: Reached target network-online.target. Oct 2 19:31:26.989039 systemd[1]: Started kubelet.service. Oct 2 19:31:26.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:26.998321 systemd[1]: Starting coreos-metadata.service... Oct 2 19:31:27.003832 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:31:27.003988 systemd[1]: Finished coreos-metadata.service. 
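The long runs of kernel audit records above are SELinux AVC denials for the bpf (capability 39) and perfmon (capability 38) capabilities, raised against PID 1 while systemd reloads and swaps its BPF programs (the interleaved "BPF prog-id=N op=LOAD/UNLOAD" records); the kernel also notes "kauditd_printk_skb: 24 callbacks suppressed", so the console copy is not complete. A rough sketch, again assuming the console output is saved as a hypothetical boot.log, that tallies the denials by permission and process:

#!/usr/bin/env python3
"""Sketch: count SELinux AVC denial records (the audit[1]: AVC lines above) in a
saved copy of this console log, grouped by denied permission and comm.
boot.log is a hypothetical path; kernel-side type=1400 duplicates are ignored."""
import re
from collections import Counter

# \s+ tolerates records that were wrapped across physical lines in the capture.
AVC = re.compile(r'AVC avc:\s+denied\s+\{ (\w+) \}\s+for\s+pid=(\d+)\s+comm="([^"]+)"')

counts = Counter()
with open("boot.log") as fh:
    for m in AVC.finditer(fh.read()):
        perm, _pid, comm = m.groups()
        counts[(perm, comm)] += 1

for (perm, comm), n in counts.most_common():
    print(f"{n:5d}  {{ {perm} }}  comm={comm}")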
Oct 2 19:31:27.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:31:27.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:31:27.043773 kubelet[1309]: E1002 19:31:27.043691 1309 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:31:27.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:31:27.045830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:31:27.045971 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:31:27.248207 systemd[1]: Stopped kubelet.service.
Oct 2 19:31:27.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:31:27.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:31:27.262350 systemd[1]: Reloading.
Oct 2 19:31:27.332593 /usr/lib/systemd/system-generators/torcx-generator[1377]: time="2023-10-02T19:31:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:31:27.332626 /usr/lib/systemd/system-generators/torcx-generator[1377]: time="2023-10-02T19:31:27Z" level=info msg="torcx already run"
Oct 2 19:31:27.395951 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:31:27.395986 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:31:27.414851 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
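The kubelet exit above (status=1, "failed to load Kubelet config file /var/lib/kubelet/config.yaml") typically just means the node has not yet been through kubeadm init/join, which is what normally writes that file; the unit is stopped and started again around the reload until the file exists. Below is a hedged Python sketch that mirrors the same precondition check; it is purely illustrative and does not create or validate a real kubelet configuration.

#!/usr/bin/env python3
"""Sketch: reproduce the kubelet's precondition from the failure above by
checking for /var/lib/kubelet/config.yaml (normally written by kubeadm).
Purely illustrative; it does not create or validate a real configuration."""
import pathlib
import sys

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

# The header such a file carries once kubeadm has generated it; shown only so
# the check below has something concrete to look for.
EXPECTED_KIND = "KubeletConfiguration"            # kind:
EXPECTED_API = "kubelet.config.k8s.io/v1beta1"    # apiVersion:

def main() -> int:
    if not CONFIG.exists():
        # Mirrors the run.go:74 error in the log: the unit keeps failing until
        # kubeadm init/join creates this file.
        print(f"missing {CONFIG}: node has not been joined with kubeadm yet", file=sys.stderr)
        return 1
    text = CONFIG.read_text()
    ok = EXPECTED_KIND in text and EXPECTED_API in text
    print(f"{CONFIG}: kind/apiVersion {'found' if ok else 'not found'}")
    return 0 if ok else 1

if __name__ == "__main__":
    sys.exit(main())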
Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.468000 audit: BPF prog-id=49 op=LOAD Oct 2 19:31:27.468000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit: BPF prog-id=50 op=LOAD Oct 2 19:31:27.470000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit: BPF prog-id=51 op=LOAD Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.471000 audit: BPF prog-id=52 op=LOAD Oct 2 19:31:27.471000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:31:27.471000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit: BPF prog-id=53 op=LOAD Oct 2 19:31:27.473000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit: BPF prog-id=54 op=LOAD Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit: BPF prog-id=55 op=LOAD Oct 2 19:31:27.473000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:31:27.473000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit: BPF prog-id=56 op=LOAD Oct 2 19:31:27.474000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.474000 audit: BPF prog-id=57 op=LOAD Oct 2 19:31:27.474000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:31:27.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit: BPF prog-id=58 op=LOAD Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.475000 audit: BPF prog-id=59 op=LOAD Oct 2 19:31:27.475000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:31:27.475000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:31:27.476000 audit: BPF prog-id=60 op=LOAD Oct 2 19:31:27.476000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit: BPF prog-id=61 op=LOAD Oct 2 19:31:27.476000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: 
AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit: BPF prog-id=62 op=LOAD Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.477000 audit: BPF prog-id=63 op=LOAD Oct 2 19:31:27.477000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:31:27.477000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:31:27.488389 systemd[1]: Started kubelet.service. Oct 2 19:31:27.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:27.551297 kubelet[1417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:31:27.551297 kubelet[1417]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
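The deprecation warnings above point at the kubelet's --config file as the replacement for these flags. As a rough sketch, under the assumption that this kubelet (v1.27.2, per the next lines) exposes the corresponding config fields, the flags could be expressed as below; the containerd socket path is an assumption, while the volume plugin directory is the one the kubelet reports further down:

    # Hypothetical config-file equivalents of the deprecated kubelet flags
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock     # assumed socket path
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    # --pod-infra-container-image has no config-file equivalent; per the warning above,
    # the sandbox image is obtained from the CRI runtime instead.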
Oct 2 19:31:27.551297 kubelet[1417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:31:27.551297 kubelet[1417]: I1002 19:31:27.549000 1417 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:31:27.855971 kubelet[1417]: I1002 19:31:27.855841 1417 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Oct 2 19:31:27.855971 kubelet[1417]: I1002 19:31:27.855885 1417 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:31:27.856132 kubelet[1417]: I1002 19:31:27.856116 1417 server.go:837] "Client rotation is on, will bootstrap in background" Oct 2 19:31:27.858780 kubelet[1417]: I1002 19:31:27.858737 1417 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:31:27.861979 kubelet[1417]: I1002 19:31:27.861958 1417 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:31:27.862195 kubelet[1417]: I1002 19:31:27.862180 1417 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:31:27.862254 kubelet[1417]: I1002 19:31:27.862244 1417 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:31:27.862351 kubelet[1417]: I1002 19:31:27.862265 1417 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:31:27.862351 kubelet[1417]: I1002 19:31:27.862279 1417 container_manager_linux.go:302] "Creating device plugin manager" Oct 2 19:31:27.862394 kubelet[1417]: I1002 19:31:27.862360 1417 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:31:27.865440 kubelet[1417]: I1002 19:31:27.865401 1417 kubelet.go:405] "Attempting to sync node with API server" Oct 2 19:31:27.865440 kubelet[1417]: I1002 19:31:27.865436 1417 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:31:27.865601 kubelet[1417]: I1002 19:31:27.865455 1417 kubelet.go:309] "Adding apiserver pod source" Oct 2 
19:31:27.865601 kubelet[1417]: I1002 19:31:27.865474 1417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:31:27.866154 kubelet[1417]: E1002 19:31:27.866119 1417 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:27.866262 kubelet[1417]: E1002 19:31:27.866220 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:27.866262 kubelet[1417]: I1002 19:31:27.866231 1417 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:31:27.866783 kubelet[1417]: W1002 19:31:27.866761 1417 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:31:27.867517 kubelet[1417]: I1002 19:31:27.867480 1417 server.go:1168] "Started kubelet" Oct 2 19:31:27.867676 kubelet[1417]: I1002 19:31:27.867644 1417 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:31:27.867676 kubelet[1417]: I1002 19:31:27.867652 1417 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:31:27.868700 kubelet[1417]: I1002 19:31:27.868675 1417 server.go:461] "Adding debug handlers to kubelet server" Oct 2 19:31:27.867000 audit[1417]: AVC avc: denied { mac_admin } for pid=1417 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.867000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:31:27.867000 audit[1417]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c51290 a1=c000f30720 a2=c000c51260 a3=25 items=0 ppid=1 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:27.867000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:31:27.867000 audit[1417]: AVC avc: denied { mac_admin } for pid=1417 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:27.867000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:31:27.867000 audit[1417]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002bf4c0 a1=c000f30738 a2=c000c51320 a3=25 items=0 ppid=1 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:27.867000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:31:27.869649 kubelet[1417]: I1002 19:31:27.869242 1417 kubelet.go:1355] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" 
err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:31:27.869649 kubelet[1417]: I1002 19:31:27.869290 1417 kubelet.go:1359] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:31:27.869649 kubelet[1417]: I1002 19:31:27.869376 1417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:31:27.869716 kubelet[1417]: E1002 19:31:27.869684 1417 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:31:27.869716 kubelet[1417]: E1002 19:31:27.869704 1417 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:31:27.871636 kubelet[1417]: I1002 19:31:27.871615 1417 volume_manager.go:284] "Starting Kubelet Volume Manager" Oct 2 19:31:27.872111 kubelet[1417]: E1002 19:31:27.872016 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613485af3c8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 867460748, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 867460748, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:27.872259 kubelet[1417]: W1002 19:31:27.872242 1417 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:31:27.872297 kubelet[1417]: E1002 19:31:27.872287 1417 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:31:27.872352 kubelet[1417]: W1002 19:31:27.872318 1417 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:31:27.872352 kubelet[1417]: E1002 19:31:27.872349 1417 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:31:27.872716 kubelet[1417]: I1002 19:31:27.872689 1417 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Oct 2 19:31:27.872883 kubelet[1417]: E1002 19:31:27.872815 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613485d15453", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 869695059, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 869695059, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:27.872958 kubelet[1417]: E1002 19:31:27.872906 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:27.874035 kubelet[1417]: W1002 19:31:27.874019 1417 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:31:27.874035 kubelet[1417]: E1002 19:31:27.874036 1417 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:31:27.874129 kubelet[1417]: E1002 19:31:27.874080 1417 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:31:27.888708 kubelet[1417]: I1002 19:31:27.888672 1417 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:31:27.888708 kubelet[1417]: I1002 19:31:27.888689 1417 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:31:27.888708 kubelet[1417]: I1002 19:31:27.888702 1417 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:31:27.889333 kubelet[1417]: E1002 19:31:27.889259 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e75876", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887915126, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887915126, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:27.890244 kubelet[1417]: E1002 19:31:27.890197 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e77cab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887924395, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887924395, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:31:27.890855 kubelet[1417]: E1002 19:31:27.890811 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e78598", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887926680, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887926680, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:27.893000 audit[1430]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:27.893000 audit[1430]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe29bdaa20 a2=0 a3=7ffe29bdaa0c items=0 ppid=1417 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:27.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:31:27.894000 audit[1436]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:27.894000 audit[1436]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd28ec6130 a2=0 a3=7ffd28ec611c items=0 ppid=1417 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:27.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:31:27.974220 kubelet[1417]: I1002 19:31:27.974170 1417 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14" Oct 2 19:31:27.975920 kubelet[1417]: E1002 19:31:27.975883 1417 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14" Oct 2 19:31:27.976097 kubelet[1417]: E1002 19:31:27.975908 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e75876", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887915126, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 974083929, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e75876" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:27.977355 kubelet[1417]: E1002 19:31:27.977252 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e77cab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887924395, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 974091232, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e77cab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:31:27.978342 kubelet[1417]: E1002 19:31:27.978279 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e78598", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887926680, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 974094609, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e78598" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:28.076053 kubelet[1417]: E1002 19:31:28.076009 1417 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:31:28.176801 kubelet[1417]: I1002 19:31:28.176776 1417 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14" Oct 2 19:31:28.178168 kubelet[1417]: E1002 19:31:28.178139 1417 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14" Oct 2 19:31:28.178318 kubelet[1417]: E1002 19:31:28.178133 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e75876", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887915126, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 28, 176729785, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e75876" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:28.179026 kubelet[1417]: E1002 19:31:28.178965 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e77cab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887924395, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 28, 176740398, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e77cab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:31:28.179781 kubelet[1417]: E1002 19:31:28.179724 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e78598", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887926680, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 28, 176742964, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e78598" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:28.289533 kubelet[1417]: I1002 19:31:28.289456 1417 policy_none.go:49] "None policy: Start" Oct 2 19:31:27.896000 audit[1438]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:27.896000 audit[1438]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdc2dde030 a2=0 a3=7ffdc2dde01c items=0 ppid=1417 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:27.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:31:28.290965 kubelet[1417]: I1002 19:31:28.290936 1417 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:31:28.291020 kubelet[1417]: I1002 19:31:28.290975 1417 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:31:28.292000 audit[1444]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:28.292000 audit[1444]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffdcce3960 a2=0 a3=7fffdcce394c items=0 ppid=1417 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:31:28.297964 systemd[1]: Created slice kubepods.slice. Oct 2 19:31:28.301803 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:31:28.304417 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:31:28.315214 kubelet[1417]: I1002 19:31:28.315181 1417 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:31:28.315324 kubelet[1417]: I1002 19:31:28.315265 1417 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:31:28.315561 kubelet[1417]: I1002 19:31:28.315487 1417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:31:28.314000 audit[1417]: AVC avc: denied { mac_admin } for pid=1417 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:28.314000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:31:28.314000 audit[1417]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000332b10 a1=c000d500f0 a2=c000332ae0 a3=25 items=0 ppid=1 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.314000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:31:28.316519 kubelet[1417]: E1002 19:31:28.316489 1417 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.14\" not found" Oct 2 19:31:28.317525 kubelet[1417]: E1002 19:31:28.317403 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a6134a06da360", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 28, 316146528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 28, 316146528, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:28.329000 audit[1449]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:28.329000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffda1aac470 a2=0 a3=7ffda1aac45c items=0 ppid=1417 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:31:28.330292 kubelet[1417]: I1002 19:31:28.330272 1417 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:31:28.330000 audit[1450]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:28.330000 audit[1450]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcef6ebe80 a2=0 a3=7ffcef6ebe6c items=0 ppid=1417 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:31:28.330000 audit[1451]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:28.330000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3f6f1d20 a2=0 a3=7ffe3f6f1d0c items=0 ppid=1417 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:31:28.331648 kubelet[1417]: I1002 19:31:28.331395 1417 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:31:28.331648 kubelet[1417]: I1002 19:31:28.331424 1417 status_manager.go:207] "Starting to sync pod status with apiserver" Oct 2 19:31:28.331648 kubelet[1417]: I1002 19:31:28.331458 1417 kubelet.go:2257] "Starting kubelet main sync loop" Oct 2 19:31:28.331648 kubelet[1417]: E1002 19:31:28.331529 1417 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:31:28.331000 audit[1452]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:28.331000 audit[1452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1bc428c0 a2=0 a3=7ffe1bc428ac items=0 ppid=1417 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:31:28.332000 audit[1453]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:28.332000 audit[1453]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffdf8d91f20 a2=0 a3=7ffdf8d91f0c items=0 ppid=1417 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:31:28.332958 kubelet[1417]: W1002 19:31:28.332934 1417 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:31:28.333088 kubelet[1417]: E1002 19:31:28.333050 1417 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:31:28.332000 audit[1454]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=1454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:28.332000 audit[1454]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffccaf7cfc0 a2=0 a3=7ffccaf7cfac items=0 ppid=1417 pid=1454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:31:28.333000 audit[1455]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:28.333000 audit[1455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdf0f3fd0 a2=0 a3=7ffcdf0f3fbc items=0 ppid=1417 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:31:28.333000 audit[1456]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:28.333000 audit[1456]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff67270bc0 a2=0 a3=7fff67270bac items=0 ppid=1417 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:28.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:31:28.478067 kubelet[1417]: E1002 19:31:28.477924 1417 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:31:28.579261 kubelet[1417]: I1002 19:31:28.579215 1417 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14" Oct 2 19:31:28.580717 kubelet[1417]: E1002 19:31:28.580630 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e75876", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887915126, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 28, 579137170, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e75876" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:28.580980 kubelet[1417]: E1002 19:31:28.580696 1417 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.14" Oct 2 19:31:28.581718 kubelet[1417]: E1002 19:31:28.581616 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e77cab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887924395, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 28, 579183798, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e77cab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:31:28.582649 kubelet[1417]: E1002 19:31:28.582594 1417 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.14.178a613486e78598", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.14", UID:"10.0.0.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.14"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 31, 27, 887926680, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 28, 579186313, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.14.178a613486e78598" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:28.833183 kubelet[1417]: W1002 19:31:28.833057 1417 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:31:28.833183 kubelet[1417]: E1002 19:31:28.833094 1417 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:31:28.860295 kubelet[1417]: I1002 19:31:28.860263 1417 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:31:28.866394 kubelet[1417]: E1002 19:31:28.866371 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:29.236252 kubelet[1417]: E1002 19:31:29.236204 1417 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.14" not found Oct 2 19:31:29.281884 kubelet[1417]: E1002 19:31:29.281837 1417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.14\" not found" node="10.0.0.14" Oct 2 19:31:29.391449 kubelet[1417]: I1002 19:31:29.391384 1417 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.14" Oct 2 19:31:29.395201 kubelet[1417]: I1002 19:31:29.395159 1417 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.14" Oct 2 19:31:29.403274 kubelet[1417]: E1002 19:31:29.403211 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:29.504226 kubelet[1417]: E1002 19:31:29.503871 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:29.506125 sudo[1238]: pam_unix(sudo:session): session closed for user root Oct 2 19:31:29.505000 audit[1238]: USER_END pid=1238 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:31:29.505000 audit[1238]: CRED_DISP pid=1238 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:31:29.507580 sshd[1235]: pam_unix(sshd:session): session closed for user core Oct 2 19:31:29.508000 audit[1235]: USER_END pid=1235 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:31:29.508000 audit[1235]: CRED_DISP pid=1235 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:31:29.509902 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:58050.service: Deactivated successfully. 
Oct 2 19:31:29.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.14:22-10.0.0.1:58050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:29.510662 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:31:29.511201 systemd-logind[1100]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:31:29.511989 systemd-logind[1100]: Removed session 7. Oct 2 19:31:29.604345 kubelet[1417]: E1002 19:31:29.604283 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:29.705004 kubelet[1417]: E1002 19:31:29.704933 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:29.805631 kubelet[1417]: E1002 19:31:29.805486 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:29.866969 kubelet[1417]: E1002 19:31:29.866910 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:29.906146 kubelet[1417]: E1002 19:31:29.906091 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.006819 kubelet[1417]: E1002 19:31:30.006723 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.107533 kubelet[1417]: E1002 19:31:30.107362 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.208059 kubelet[1417]: E1002 19:31:30.207991 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.308767 kubelet[1417]: E1002 19:31:30.308682 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.409832 kubelet[1417]: E1002 19:31:30.409689 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.510389 kubelet[1417]: E1002 19:31:30.510315 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.611174 kubelet[1417]: E1002 19:31:30.611094 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.711811 kubelet[1417]: E1002 19:31:30.711748 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.812539 kubelet[1417]: E1002 19:31:30.812429 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:30.867903 kubelet[1417]: E1002 19:31:30.867843 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:30.913316 kubelet[1417]: E1002 19:31:30.913275 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.013898 kubelet[1417]: E1002 19:31:31.013767 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.114305 kubelet[1417]: E1002 19:31:31.114256 1417 kubelet_node_status.go:458] "Error getting the current 
node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.215707 kubelet[1417]: E1002 19:31:31.215381 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.316524 kubelet[1417]: E1002 19:31:31.316076 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.416664 kubelet[1417]: E1002 19:31:31.416606 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.517220 kubelet[1417]: E1002 19:31:31.517164 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.617728 kubelet[1417]: E1002 19:31:31.617607 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.718099 kubelet[1417]: E1002 19:31:31.718047 1417 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.14\" not found" Oct 2 19:31:31.819341 kubelet[1417]: I1002 19:31:31.819296 1417 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:31:31.819762 env[1112]: time="2023-10-02T19:31:31.819691325Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:31:31.820325 kubelet[1417]: I1002 19:31:31.820274 1417 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:31:31.868484 kubelet[1417]: I1002 19:31:31.868283 1417 apiserver.go:52] "Watching apiserver" Oct 2 19:31:31.868484 kubelet[1417]: E1002 19:31:31.868329 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:31.871384 kubelet[1417]: I1002 19:31:31.871359 1417 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:31:31.871481 kubelet[1417]: I1002 19:31:31.871464 1417 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:31:31.874216 kubelet[1417]: I1002 19:31:31.874182 1417 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Oct 2 19:31:31.879342 systemd[1]: Created slice kubepods-besteffort-pod00e6752f_c69f_4e18_9e1f_3d20110f08a3.slice. Oct 2 19:31:31.892642 systemd[1]: Created slice kubepods-burstable-podfe32c401_8569_4283_914d_545fbac827dd.slice. 
Oct 2 19:31:31.894745 kubelet[1417]: I1002 19:31:31.894713 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00e6752f-c69f-4e18-9e1f-3d20110f08a3-xtables-lock\") pod \"kube-proxy-hk9v2\" (UID: \"00e6752f-c69f-4e18-9e1f-3d20110f08a3\") " pod="kube-system/kube-proxy-hk9v2" Oct 2 19:31:31.894841 kubelet[1417]: I1002 19:31:31.894777 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m49fv\" (UniqueName: \"kubernetes.io/projected/00e6752f-c69f-4e18-9e1f-3d20110f08a3-kube-api-access-m49fv\") pod \"kube-proxy-hk9v2\" (UID: \"00e6752f-c69f-4e18-9e1f-3d20110f08a3\") " pod="kube-system/kube-proxy-hk9v2" Oct 2 19:31:31.894841 kubelet[1417]: I1002 19:31:31.894810 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cilium-run\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.894841 kubelet[1417]: I1002 19:31:31.894836 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cni-path\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.894971 kubelet[1417]: I1002 19:31:31.894873 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-host-proc-sys-kernel\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.894971 kubelet[1417]: I1002 19:31:31.894907 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8f65\" (UniqueName: \"kubernetes.io/projected/fe32c401-8569-4283-914d-545fbac827dd-kube-api-access-l8f65\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.894971 kubelet[1417]: I1002 19:31:31.894941 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-bpf-maps\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895096 kubelet[1417]: I1002 19:31:31.894979 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-hostproc\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895096 kubelet[1417]: I1002 19:31:31.895012 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-lib-modules\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895096 kubelet[1417]: I1002 19:31:31.895048 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-xtables-lock\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895096 kubelet[1417]: I1002 19:31:31.895076 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe32c401-8569-4283-914d-545fbac827dd-clustermesh-secrets\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895238 kubelet[1417]: I1002 19:31:31.895117 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00e6752f-c69f-4e18-9e1f-3d20110f08a3-lib-modules\") pod \"kube-proxy-hk9v2\" (UID: \"00e6752f-c69f-4e18-9e1f-3d20110f08a3\") " pod="kube-system/kube-proxy-hk9v2" Oct 2 19:31:31.895238 kubelet[1417]: I1002 19:31:31.895173 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-etc-cni-netd\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895238 kubelet[1417]: I1002 19:31:31.895195 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe32c401-8569-4283-914d-545fbac827dd-hubble-tls\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895238 kubelet[1417]: I1002 19:31:31.895217 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00e6752f-c69f-4e18-9e1f-3d20110f08a3-kube-proxy\") pod \"kube-proxy-hk9v2\" (UID: \"00e6752f-c69f-4e18-9e1f-3d20110f08a3\") " pod="kube-system/kube-proxy-hk9v2" Oct 2 19:31:31.895238 kubelet[1417]: I1002 19:31:31.895239 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cilium-cgroup\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895406 kubelet[1417]: I1002 19:31:31.895264 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe32c401-8569-4283-914d-545fbac827dd-cilium-config-path\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895406 kubelet[1417]: I1002 19:31:31.895287 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-host-proc-sys-net\") pod \"cilium-q5m6r\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " pod="kube-system/cilium-q5m6r" Oct 2 19:31:31.895406 kubelet[1417]: I1002 19:31:31.895304 1417 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:31:32.190481 kubelet[1417]: E1002 19:31:32.190440 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 
19:31:32.191180 env[1112]: time="2023-10-02T19:31:32.191132334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk9v2,Uid:00e6752f-c69f-4e18-9e1f-3d20110f08a3,Namespace:kube-system,Attempt:0,}" Oct 2 19:31:32.200295 kubelet[1417]: E1002 19:31:32.200269 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:32.200641 env[1112]: time="2023-10-02T19:31:32.200599198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q5m6r,Uid:fe32c401-8569-4283-914d-545fbac827dd,Namespace:kube-system,Attempt:0,}" Oct 2 19:31:32.868801 kubelet[1417]: E1002 19:31:32.868744 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.686657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55646773.mount: Deactivated successfully. Oct 2 19:31:33.743048 env[1112]: time="2023-10-02T19:31:33.742976248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:33.744031 env[1112]: time="2023-10-02T19:31:33.744005941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:33.746750 env[1112]: time="2023-10-02T19:31:33.746723474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:33.748002 env[1112]: time="2023-10-02T19:31:33.747970153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:33.749179 env[1112]: time="2023-10-02T19:31:33.749147552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:33.750522 env[1112]: time="2023-10-02T19:31:33.750455677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:33.751758 env[1112]: time="2023-10-02T19:31:33.751727480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:33.753011 env[1112]: time="2023-10-02T19:31:33.752981477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:33.774480 env[1112]: time="2023-10-02T19:31:33.774402019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:31:33.774480 env[1112]: time="2023-10-02T19:31:33.774447036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:31:33.775109 env[1112]: time="2023-10-02T19:31:33.774460231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:31:33.775172 env[1112]: time="2023-10-02T19:31:33.775109682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126 pid=1475 runtime=io.containerd.runc.v2 Oct 2 19:31:33.779014 env[1112]: time="2023-10-02T19:31:33.778926489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:31:33.779014 env[1112]: time="2023-10-02T19:31:33.778964239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:31:33.779014 env[1112]: time="2023-10-02T19:31:33.778994133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:31:33.779382 env[1112]: time="2023-10-02T19:31:33.779339780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f6f72ee9679b7ceceabe2e9fa4de27bcc55b101288f7b8f79af6e3a9ca54431 pid=1486 runtime=io.containerd.runc.v2 Oct 2 19:31:33.793824 systemd[1]: Started cri-containerd-1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126.scope. Oct 2 19:31:33.820246 systemd[1]: Started cri-containerd-6f6f72ee9679b7ceceabe2e9fa4de27bcc55b101288f7b8f79af6e3a9ca54431.scope. Oct 2 19:31:33.853870 kernel: kauditd_printk_skb: 397 callbacks suppressed Oct 2 19:31:33.854036 kernel: audit: type=1400 audit(1696275093.847:551): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.854076 kernel: audit: type=1400 audit(1696275093.847:552): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.854094 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:31:33.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.855543 kernel: audit: type=1400 audit(1696275093.847:553): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.855666 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:31:33.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.856518 kernel: audit: type=1400 audit(1696275093.847:554): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.847000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.857135 kernel: audit: backlog limit exceeded Oct 2 19:31:33.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.861266 kernel: audit: type=1400 audit(1696275093.847:555): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.863208 kernel: audit: type=1400 audit(1696275093.847:556): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.865143 kernel: audit: type=1400 audit(1696275093.848:557): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit: BPF prog-id=64 op=LOAD Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1475 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138333766396433356638373239666337313032333066346133633435 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1475 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138333766396433356638373239666337313032333066346133633435 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.850000 audit: BPF prog-id=65 op=LOAD Oct 2 19:31:33.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit: BPF prog-id=66 op=LOAD Oct 2 19:31:33.850000 audit[1496]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0000246b0 items=0 ppid=1475 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138333766396433356638373239666337313032333066346133633435 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC 
avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit: BPF prog-id=67 op=LOAD Oct 2 19:31:33.852000 audit[1496]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0000246f8 items=0 ppid=1475 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138333766396433356638373239666337313032333066346133633435 Oct 2 19:31:33.852000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:31:33.852000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { perfmon } for pid=1496 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit[1496]: AVC avc: denied { bpf } for pid=1496 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.852000 audit: BPF prog-id=68 op=LOAD Oct 2 19:31:33.852000 audit[1496]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000024b08 items=0 ppid=1475 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.852000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138333766396433356638373239666337313032333066346133633435 Oct 2 19:31:33.852000 audit[1500]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001c5c48 a2=10 a3=1c items=0 ppid=1486 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366637326565393637396237636563656162653265396661346465 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001c56b0 a2=3c a3=c items=0 ppid=1486 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366637326565393637396237636563656162653265396661346465 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit: BPF prog-id=69 op=LOAD Oct 2 19:31:33.864000 audit[1500]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001c59d8 a2=78 a3=c000290360 items=0 ppid=1486 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366637326565393637396237636563656162653265396661346465 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit: BPF prog-id=70 op=LOAD Oct 2 19:31:33.864000 audit[1500]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001c5770 a2=78 a3=c0002903a8 items=0 ppid=1486 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.864000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366637326565393637396237636563656162653265396661346465 Oct 2 19:31:33.864000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:31:33.864000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { perfmon } for pid=1500 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit[1500]: AVC avc: denied { bpf } for pid=1500 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:33.864000 audit: BPF prog-id=71 op=LOAD Oct 2 19:31:33.864000 audit[1500]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001c5c30 a2=78 a3=c0002907b8 items=0 ppid=1486 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:33.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666366637326565393637396237636563656162653265396661346465 Oct 2 19:31:33.869410 kubelet[1417]: E1002 19:31:33.869363 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.881878 env[1112]: time="2023-10-02T19:31:33.881792589Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-hk9v2,Uid:00e6752f-c69f-4e18-9e1f-3d20110f08a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f6f72ee9679b7ceceabe2e9fa4de27bcc55b101288f7b8f79af6e3a9ca54431\"" Oct 2 19:31:33.882082 env[1112]: time="2023-10-02T19:31:33.881921060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q5m6r,Uid:fe32c401-8569-4283-914d-545fbac827dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\"" Oct 2 19:31:33.882901 kubelet[1417]: E1002 19:31:33.882858 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:33.883349 kubelet[1417]: E1002 19:31:33.883321 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:33.884483 env[1112]: time="2023-10-02T19:31:33.884440740Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:31:34.869712 kubelet[1417]: E1002 19:31:34.869648 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.870315 kubelet[1417]: E1002 19:31:35.870237 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:36.870738 kubelet[1417]: E1002 19:31:36.870679 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:37.871293 kubelet[1417]: E1002 19:31:37.871240 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:38.872052 kubelet[1417]: E1002 19:31:38.871979 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.872714 kubelet[1417]: E1002 19:31:39.872661 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:40.873865 kubelet[1417]: E1002 19:31:40.873797 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:41.874117 kubelet[1417]: E1002 19:31:41.874076 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:42.874918 kubelet[1417]: E1002 19:31:42.874852 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:43.875465 kubelet[1417]: E1002 19:31:43.875399 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:44.634641 env[1112]: time="2023-10-02T19:31:44.634475131Z" level=error msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" failed" error="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 504 Gateway Time-out" Oct 2 
19:31:44.635071 kubelet[1417]: E1002 19:31:44.634924 1417 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 504 Gateway Time-out" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Oct 2 19:31:44.635071 kubelet[1417]: E1002 19:31:44.634974 1417 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 504 Gateway Time-out" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Oct 2 19:31:44.635211 kubelet[1417]: E1002 19:31:44.635186 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:44.635211 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:44.635211 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:31:44.635287 kubelet[1417]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8f65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 504 Gateway 
Time-out Oct 2 19:31:44.635287 kubelet[1417]: E1002 19:31:44.635258 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 504 Gateway Time-out\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:31:44.635539 env[1112]: time="2023-10-02T19:31:44.635492937Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\"" Oct 2 19:31:44.876320 kubelet[1417]: E1002 19:31:44.876251 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:45.366542 kubelet[1417]: E1002 19:31:45.366118 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:45.367169 kubelet[1417]: E1002 19:31:45.367138 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:31:45.682148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668056457.mount: Deactivated successfully. Oct 2 19:31:45.877448 kubelet[1417]: E1002 19:31:45.877379 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:46.230301 env[1112]: time="2023-10-02T19:31:46.230215084Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:46.232101 env[1112]: time="2023-10-02T19:31:46.232049190Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:46.233524 env[1112]: time="2023-10-02T19:31:46.233441327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:46.234636 env[1112]: time="2023-10-02T19:31:46.234581217Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8e9eff2f6d0b398f9ac5f5a15c1cb7d5f468f28d64a78d593d57f72a969a54ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:46.235122 env[1112]: time="2023-10-02T19:31:46.235087014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\" returns image reference \"sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985\"" Oct 2 19:31:46.237222 env[1112]: time="2023-10-02T19:31:46.237186863Z" level=info msg="CreateContainer within sandbox \"6f6f72ee9679b7ceceabe2e9fa4de27bcc55b101288f7b8f79af6e3a9ca54431\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:31:46.255728 env[1112]: time="2023-10-02T19:31:46.255679058Z" level=info 
msg="CreateContainer within sandbox \"6f6f72ee9679b7ceceabe2e9fa4de27bcc55b101288f7b8f79af6e3a9ca54431\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b1a2afb7ac0f600da8e514f1019f5040522a5475c340c298de0a486b7af9c97\"" Oct 2 19:31:46.256289 env[1112]: time="2023-10-02T19:31:46.256256062Z" level=info msg="StartContainer for \"7b1a2afb7ac0f600da8e514f1019f5040522a5475c340c298de0a486b7af9c97\"" Oct 2 19:31:46.271858 systemd[1]: Started cri-containerd-7b1a2afb7ac0f600da8e514f1019f5040522a5475c340c298de0a486b7af9c97.scope. Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.289542 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:31:46.289662 kernel: audit: type=1400 audit(1696275106.284:587): avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.289704 kernel: audit: type=1300 audit(1696275106.284:587): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1486 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.284000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1486 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316132616662376163306636303064613865353134663130313966 Oct 2 19:31:46.293477 kernel: audit: type=1327 audit(1696275106.284:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316132616662376163306636303064613865353134663130313966 Oct 2 19:31:46.293557 kernel: audit: type=1400 audit(1696275106.284:588): avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.297206 kernel: audit: type=1400 audit(1696275106.284:588): avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.297253 kernel: audit: type=1400 audit(1696275106.284:588): avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.303090 kernel: audit: type=1400 audit(1696275106.284:588): avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.303225 kernel: audit: type=1400 audit(1696275106.284:588): avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.305466 kernel: audit: type=1400 audit(1696275106.284:588): avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.284000 audit: BPF prog-id=72 op=LOAD Oct 2 19:31:46.284000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0000dfbe0 items=0 ppid=1486 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316132616662376163306636303064613865353134663130313966 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.287000 audit: BPF prog-id=73 op=LOAD Oct 2 19:31:46.287000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0000dfc28 items=0 ppid=1486 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316132616662376163306636303064613865353134663130313966 Oct 2 19:31:46.290000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:31:46.290000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.308535 kernel: audit: type=1400 audit(1696275106.284:588): avc: denied { perfmon } for pid=1548 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { perfmon } for pid=1548 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit[1548]: AVC avc: denied { bpf } for pid=1548 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:46.290000 audit: BPF prog-id=74 op=LOAD Oct 2 19:31:46.290000 audit[1548]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0000dfcb8 items=0 ppid=1486 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762316132616662376163306636303064613865353134663130313966 Oct 2 19:31:46.312923 env[1112]: time="2023-10-02T19:31:46.312883490Z" level=info msg="StartContainer for \"7b1a2afb7ac0f600da8e514f1019f5040522a5475c340c298de0a486b7af9c97\" returns successfully" Oct 2 19:31:46.358000 audit[1599]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.358000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed62dd0a0 a2=0 a3=7ffed62dd08c items=0 ppid=1559 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:31:46.358000 audit[1600]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.358000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe17503400 a2=0 a3=7ffe175033ec items=0 ppid=1559 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:31:46.359000 audit[1601]: NETFILTER_CFG table=nat:16 family=10 entries=1 op=nft_register_chain pid=1601 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.359000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1dc99150 a2=0 a3=7ffe1dc9913c items=0 ppid=1559 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:31:46.360000 audit[1602]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_chain pid=1602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.360000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2e4b1790 a2=0 a3=7fff2e4b177c items=0 ppid=1559 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:31:46.362000 audit[1603]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1603 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.362000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe01f00090 a2=0 a3=7ffe01f0007c items=0 ppid=1559 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:31:46.363000 audit[1605]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_chain pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.363000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3a922fd0 a2=0 a3=7ffc3a922fbc items=0 ppid=1559 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:31:46.369633 kubelet[1417]: E1002 19:31:46.369613 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:46.377766 kubelet[1417]: I1002 19:31:46.377734 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hk9v2" podStartSLOduration=5.026623869 podCreationTimestamp="2023-10-02 19:31:29 +0000 UTC" firstStartedPulling="2023-10-02 19:31:33.884360879 +0000 UTC m=+6.393501890" lastFinishedPulling="2023-10-02 19:31:46.235430022 +0000 UTC m=+18.744571033" observedRunningTime="2023-10-02 19:31:46.377575829 +0000 UTC m=+18.886716840" watchObservedRunningTime="2023-10-02 19:31:46.377693012 +0000 UTC m=+18.886834023" Oct 2 19:31:46.464000 audit[1606]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1606 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Oct 2 19:31:46.464000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdcc6d5d20 a2=0 a3=7ffdcc6d5d0c items=0 ppid=1559 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:31:46.466000 audit[1608]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.466000 audit[1608]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff2e4f4510 a2=0 a3=7fff2e4f44fc items=0 ppid=1559 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:31:46.469000 audit[1611]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.469000 audit[1611]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff15a87eb0 a2=0 a3=7fff15a87e9c items=0 ppid=1559 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:31:46.470000 audit[1612]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.470000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6d3c51b0 a2=0 a3=7ffd6d3c519c items=0 ppid=1559 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:31:46.472000 audit[1614]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.472000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd3c77a740 a2=0 a3=7ffd3c77a72c items=0 ppid=1559 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.472000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:31:46.473000 audit[1615]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.473000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdda6a8dd0 a2=0 a3=7ffdda6a8dbc items=0 ppid=1559 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:31:46.475000 audit[1617]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.475000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce2bc4690 a2=0 a3=7ffce2bc467c items=0 ppid=1559 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:31:46.478000 audit[1620]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.478000 audit[1620]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdbde44e80 a2=0 a3=7ffdbde44e6c items=0 ppid=1559 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:31:46.479000 audit[1621]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.479000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc7743990 a2=0 a3=7ffcc774397c items=0 ppid=1559 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:31:46.481000 audit[1623]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.481000 audit[1623]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3c6d6fb0 a2=0 a3=7ffc3c6d6f9c items=0 ppid=1559 pid=1623 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:31:46.482000 audit[1624]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1624 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.482000 audit[1624]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd180d1d90 a2=0 a3=7ffd180d1d7c items=0 ppid=1559 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:31:46.484000 audit[1626]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.484000 audit[1626]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd29fad280 a2=0 a3=7ffd29fad26c items=0 ppid=1559 pid=1626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:46.488000 audit[1629]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.488000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4d9ed670 a2=0 a3=7fff4d9ed65c items=0 ppid=1559 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:46.491000 audit[1632]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.491000 audit[1632]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd60d1ebf0 a2=0 a3=7ffd60d1ebdc items=0 ppid=1559 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.491000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:31:46.491000 audit[1633]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.491000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffd233c290 a2=0 a3=7fffd233c27c items=0 ppid=1559 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:31:46.493000 audit[1635]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.493000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffef839cb20 a2=0 a3=7ffef839cb0c items=0 ppid=1559 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:46.515000 audit[1641]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.515000 audit[1641]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff40e1bf10 a2=0 a3=7fff40e1befc items=0 ppid=1559 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:46.520000 audit[1646]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.520000 audit[1646]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaffa59f0 a2=0 a3=7ffeaffa59dc items=0 ppid=1559 pid=1646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.520000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:31:46.522000 audit[1648]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:46.522000 audit[1648]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcbd4bc430 a2=0 a3=7ffcbd4bc41c items=0 ppid=1559 pid=1648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.522000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:31:46.530000 audit[1650]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:31:46.530000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffe03685070 a2=0 a3=7ffe0368505c items=0 ppid=1559 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:46.547000 audit[1650]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:31:46.547000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe03685070 a2=0 a3=7ffe0368505c items=0 ppid=1559 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:46.548000 audit[1656]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1656 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.548000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff9aeeaee0 a2=0 a3=7fff9aeeaecc items=0 ppid=1559 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:31:46.550000 audit[1658]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.550000 audit[1658]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc37cab280 a2=0 a3=7ffc37cab26c items=0 ppid=1559 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:31:46.555000 audit[1661]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1661 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.555000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffc643806f0 a2=0 a3=7ffc643806dc items=0 ppid=1559 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:31:46.556000 audit[1662]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.556000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5dd6d400 a2=0 a3=7ffd5dd6d3ec items=0 ppid=1559 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:31:46.559000 audit[1664]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1664 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.559000 audit[1664]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5a9f51c0 a2=0 a3=7ffc5a9f51ac items=0 ppid=1559 pid=1664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:31:46.560000 audit[1665]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.560000 audit[1665]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9ab62530 a2=0 a3=7ffc9ab6251c items=0 ppid=1559 pid=1665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:31:46.564000 audit[1667]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1667 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.564000 audit[1667]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffd4dcb970 a2=0 a3=7fffd4dcb95c items=0 ppid=1559 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 
19:31:46.567000 audit[1670]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.567000 audit[1670]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffca874c870 a2=0 a3=7ffca874c85c items=0 ppid=1559 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:31:46.568000 audit[1671]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.568000 audit[1671]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc735cdf0 a2=0 a3=7fffc735cddc items=0 ppid=1559 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.568000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:31:46.571000 audit[1673]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1673 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.571000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc83f5daa0 a2=0 a3=7ffc83f5da8c items=0 ppid=1559 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.571000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:31:46.573000 audit[1674]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.573000 audit[1674]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7dd1aa00 a2=0 a3=7ffd7dd1a9ec items=0 ppid=1559 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:31:46.576000 audit[1676]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1676 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.576000 audit[1676]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd23383b00 a2=0 a3=7ffd23383aec items=0 ppid=1559 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.576000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:46.579000 audit[1679]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1679 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.579000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd289be3b0 a2=0 a3=7ffd289be39c items=0 ppid=1559 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:31:46.582000 audit[1682]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1682 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.582000 audit[1682]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdfbfc9e90 a2=0 a3=7ffdfbfc9e7c items=0 ppid=1559 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:31:46.583000 audit[1683]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1683 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.583000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff92a095e0 a2=0 a3=7fff92a095cc items=0 ppid=1559 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:31:46.585000 audit[1685]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.585000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc27c1f950 a2=0 a3=7ffc27c1f93c items=0 ppid=1559 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:46.588000 audit[1688]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1688 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.588000 
audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe817dac70 a2=0 a3=7ffe817dac5c items=0 ppid=1559 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:46.589000 audit[1689]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.589000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9b1168f0 a2=0 a3=7ffe9b1168dc items=0 ppid=1559 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:31:46.590000 audit[1691]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_rule pid=1691 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.590000 audit[1691]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe3f3ca8c0 a2=0 a3=7ffe3f3ca8ac items=0 ppid=1559 pid=1691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:31:46.593000 audit[1694]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_rule pid=1694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.593000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdaaf50d80 a2=0 a3=7ffdaaf50d6c items=0 ppid=1559 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:31:46.594000 audit[1695]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=1695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.594000 audit[1695]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc82254420 a2=0 a3=7ffc8225440c items=0 ppid=1559 pid=1695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:31:46.596000 audit[1697]: NETFILTER_CFG table=nat:62 family=10 entries=2 op=nft_register_chain pid=1697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:46.596000 audit[1697]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=612 a0=3 a1=7fff4a9e62a0 a2=0 a3=7fff4a9e628c items=0 ppid=1559 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:31:46.598000 audit[1699]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:31:46.598000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff6980fce0 a2=0 a3=7fff6980fccc items=0 ppid=1559 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.598000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:46.599000 audit[1699]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:31:46.599000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7fff6980fce0 a2=0 a3=7fff6980fccc items=0 ppid=1559 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:46.599000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:46.877993 kubelet[1417]: E1002 19:31:46.877863 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:47.371579 kubelet[1417]: E1002 19:31:47.371542 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:47.865770 kubelet[1417]: E1002 19:31:47.865717 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:47.878893 kubelet[1417]: E1002 19:31:47.878869 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:48.878998 kubelet[1417]: E1002 19:31:48.878952 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:49.879821 kubelet[1417]: E1002 19:31:49.879772 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:50.879945 kubelet[1417]: E1002 19:31:50.879899 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:51.880607 kubelet[1417]: E1002 19:31:51.880545 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:52.881485 kubelet[1417]: E1002 19:31:52.881432 1417 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:53.882187 kubelet[1417]: E1002 19:31:53.882156 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:54.883106 kubelet[1417]: E1002 19:31:54.883058 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:55.883859 kubelet[1417]: E1002 19:31:55.883820 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:56.884855 kubelet[1417]: E1002 19:31:56.884793 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:57.885893 kubelet[1417]: E1002 19:31:57.885840 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:58.886322 kubelet[1417]: E1002 19:31:58.886266 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:59.886629 kubelet[1417]: E1002 19:31:59.886560 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:00.332763 kubelet[1417]: E1002 19:32:00.332731 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:00.333700 env[1112]: time="2023-10-02T19:32:00.333662014Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:32:00.887352 kubelet[1417]: E1002 19:32:00.887302 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:01.888153 kubelet[1417]: E1002 19:32:01.888112 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:02.889134 kubelet[1417]: E1002 19:32:02.889077 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:03.890000 kubelet[1417]: E1002 19:32:03.889944 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:04.890767 kubelet[1417]: E1002 19:32:04.890716 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:05.891091 kubelet[1417]: E1002 19:32:05.891021 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:06.891621 kubelet[1417]: E1002 19:32:06.891563 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:07.098258 update_engine[1101]: I1002 19:32:07.098181 1101 update_attempter.cc:505] Updating boot flags... 
Oct 2 19:32:07.865926 kubelet[1417]: E1002 19:32:07.865863 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:07.892173 kubelet[1417]: E1002 19:32:07.892137 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:08.893230 kubelet[1417]: E1002 19:32:08.893184 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:09.894336 kubelet[1417]: E1002 19:32:09.894267 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:10.894790 kubelet[1417]: E1002 19:32:10.894740 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:11.438104 env[1112]: time="2023-10-02T19:32:11.438035112Z" level=error msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" failed" error="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3: 504 Gateway Time-out" Oct 2 19:32:11.438560 kubelet[1417]: E1002 19:32:11.438254 1417 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3: 504 Gateway Time-out" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Oct 2 19:32:11.438560 kubelet[1417]: E1002 19:32:11.438287 1417 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3: 504 Gateway Time-out" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Oct 2 19:32:11.438560 kubelet[1417]: E1002 19:32:11.438379 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:11.438560 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:11.438560 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:32:11.438560 kubelet[1417]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8f65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3: 504 Gateway Time-out Oct 2 19:32:11.438560 kubelet[1417]: E1002 19:32:11.438434 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3: 504 Gateway Time-out\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:32:11.895971 kubelet[1417]: E1002 19:32:11.895846 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:12.896568 kubelet[1417]: E1002 19:32:12.896491 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:13.896945 kubelet[1417]: E1002 19:32:13.896883 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.897579 kubelet[1417]: E1002 19:32:14.897535 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:15.898702 kubelet[1417]: E1002 19:32:15.898660 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:16.899778 kubelet[1417]: E1002 19:32:16.899721 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:17.900721 
kubelet[1417]: E1002 19:32:17.900654 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:18.901625 kubelet[1417]: E1002 19:32:18.901575 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:19.901974 kubelet[1417]: E1002 19:32:19.901938 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:20.902671 kubelet[1417]: E1002 19:32:20.902606 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:21.903378 kubelet[1417]: E1002 19:32:21.903319 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:22.903912 kubelet[1417]: E1002 19:32:22.903876 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:23.904675 kubelet[1417]: E1002 19:32:23.904610 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:24.905121 kubelet[1417]: E1002 19:32:24.905054 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:25.332005 kubelet[1417]: E1002 19:32:25.331961 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:25.332694 kubelet[1417]: E1002 19:32:25.332663 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:32:25.905681 kubelet[1417]: E1002 19:32:25.905627 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:26.906214 kubelet[1417]: E1002 19:32:26.906144 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:27.866182 kubelet[1417]: E1002 19:32:27.866119 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:27.906379 kubelet[1417]: E1002 19:32:27.906351 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:28.907219 kubelet[1417]: E1002 19:32:28.907164 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:29.908122 kubelet[1417]: E1002 19:32:29.908073 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:30.908676 kubelet[1417]: E1002 19:32:30.908613 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:31.909164 kubelet[1417]: E1002 19:32:31.909099 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:32.909657 kubelet[1417]: E1002 19:32:32.909601 
1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:33.910256 kubelet[1417]: E1002 19:32:33.910185 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:34.911080 kubelet[1417]: E1002 19:32:34.911031 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:35.911986 kubelet[1417]: E1002 19:32:35.911939 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:36.912631 kubelet[1417]: E1002 19:32:36.912598 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:37.913548 kubelet[1417]: E1002 19:32:37.913494 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:38.913809 kubelet[1417]: E1002 19:32:38.913746 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:39.332317 kubelet[1417]: E1002 19:32:39.332284 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:39.333183 env[1112]: time="2023-10-02T19:32:39.333144295Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:32:39.914644 kubelet[1417]: E1002 19:32:39.914578 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:40.915690 kubelet[1417]: E1002 19:32:40.915642 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:41.916651 kubelet[1417]: E1002 19:32:41.916612 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:42.917274 kubelet[1417]: E1002 19:32:42.917212 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:43.669970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683951229.mount: Deactivated successfully. 
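The first pull attempt above failed because the quay.io blob endpoint answered 504 Gateway Time-out, so kubelet put the pod into ImagePullBackOff before containerd retried. When chasing that kind of failure, probing the same blob URL from the node helps separate a registry-side outage from a local network or proxy problem. A rough diagnostic sketch, assuming outbound HTTPS from the node is allowed (an unauthenticated request may also get 401/403 from the registry, which still proves the path is reachable):

    # Probe the blob URL that containerd reported a 504 for (copied from the log above).
    import urllib.error
    import urllib.request

    url = ("https://quay.io/v2/cilium/cilium/blobs/"
           "sha256:c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3")
    try:
        with urllib.request.urlopen(urllib.request.Request(url, method="HEAD"), timeout=10) as resp:
            print("status:", resp.status)
    except urllib.error.HTTPError as exc:
        print("status:", exc.code)      # 504 matches what containerd saw; 401/403 means auth, not an outage
    except urllib.error.URLError as exc:
        print("unreachable:", exc.reason)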
Oct 2 19:32:43.918187 kubelet[1417]: E1002 19:32:43.918123 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:44.918479 kubelet[1417]: E1002 19:32:44.918400 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:45.919478 kubelet[1417]: E1002 19:32:45.919417 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:46.920221 kubelet[1417]: E1002 19:32:46.920162 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:47.088376 env[1112]: time="2023-10-02T19:32:47.088303342Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:47.090176 env[1112]: time="2023-10-02T19:32:47.090144229Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:47.093777 env[1112]: time="2023-10-02T19:32:47.093746204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:47.094242 env[1112]: time="2023-10-02T19:32:47.094211988Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 19:32:47.095841 env[1112]: time="2023-10-02T19:32:47.095803436Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:32:47.107411 env[1112]: time="2023-10-02T19:32:47.107360900Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\"" Oct 2 19:32:47.107860 env[1112]: time="2023-10-02T19:32:47.107812588Z" level=info msg="StartContainer for \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\"" Oct 2 19:32:47.125229 systemd[1]: Started cri-containerd-9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8.scope. Oct 2 19:32:47.131560 systemd[1]: cri-containerd-9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8.scope: Deactivated successfully. Oct 2 19:32:47.131777 systemd[1]: Stopped cri-containerd-9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8.scope. Oct 2 19:32:47.135153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8-rootfs.mount: Deactivated successfully. 
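The retry above succeeds, and both names in play are content addresses: the Cilium tag is pinned to a manifest digest (@sha256:06ce2b...), and containerd reports the digest of the stored image config (sha256:3e35b3...) as the resulting reference. A digest is simply the SHA-256 of the referenced bytes, so any drift in content produces a different name. A tiny illustration with a stand-in payload, not the real manifest:

    import hashlib

    manifest_bytes = b'{"schemaVersion": 2}'   # stand-in payload, not the real Cilium manifest
    print("sha256:" + hashlib.sha256(manifest_bytes).hexdigest())
    # Changing even one byte of manifest_bytes changes the digest, which is why
    # pinning an image by @sha256:... guarantees exactly which content gets pulled.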
Oct 2 19:32:47.531408 env[1112]: time="2023-10-02T19:32:47.531355914Z" level=info msg="shim disconnected" id=9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8 Oct 2 19:32:47.531408 env[1112]: time="2023-10-02T19:32:47.531403694Z" level=warning msg="cleaning up after shim disconnected" id=9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8 namespace=k8s.io Oct 2 19:32:47.531408 env[1112]: time="2023-10-02T19:32:47.531412080Z" level=info msg="cleaning up dead shim" Oct 2 19:32:47.537803 env[1112]: time="2023-10-02T19:32:47.537771392Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1757 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:47.538077 env[1112]: time="2023-10-02T19:32:47.537978671Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:32:47.538268 env[1112]: time="2023-10-02T19:32:47.538224954Z" level=error msg="Failed to pipe stderr of container \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\"" error="reading from a closed fifo" Oct 2 19:32:47.538268 env[1112]: time="2023-10-02T19:32:47.538197132Z" level=error msg="Failed to pipe stdout of container \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\"" error="reading from a closed fifo" Oct 2 19:32:47.541065 env[1112]: time="2023-10-02T19:32:47.541015334Z" level=error msg="StartContainer for \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:47.541288 kubelet[1417]: E1002 19:32:47.541261 1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8" Oct 2 19:32:47.541423 kubelet[1417]: E1002 19:32:47.541382 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:47.541423 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:47.541423 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:32:47.541423 kubelet[1417]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8f65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:47.541423 kubelet[1417]: E1002 19:32:47.541417 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:32:47.865807 kubelet[1417]: E1002 19:32:47.865679 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:47.921105 kubelet[1417]: E1002 19:32:47.921082 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:48.465932 kubelet[1417]: E1002 19:32:48.465894 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:48.467470 env[1112]: time="2023-10-02T19:32:48.467394389Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:32:48.481786 env[1112]: time="2023-10-02T19:32:48.481737273Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\"" Oct 2 19:32:48.482401 env[1112]: time="2023-10-02T19:32:48.482348491Z" level=info msg="StartContainer for \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\"" Oct 2 
19:32:48.496428 systemd[1]: Started cri-containerd-a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a.scope. Oct 2 19:32:48.506813 systemd[1]: cri-containerd-a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a.scope: Deactivated successfully. Oct 2 19:32:48.507098 systemd[1]: Stopped cri-containerd-a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a.scope. Oct 2 19:32:48.514715 env[1112]: time="2023-10-02T19:32:48.514642073Z" level=info msg="shim disconnected" id=a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a Oct 2 19:32:48.514715 env[1112]: time="2023-10-02T19:32:48.514700994Z" level=warning msg="cleaning up after shim disconnected" id=a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a namespace=k8s.io Oct 2 19:32:48.514715 env[1112]: time="2023-10-02T19:32:48.514717004Z" level=info msg="cleaning up dead shim" Oct 2 19:32:48.521730 env[1112]: time="2023-10-02T19:32:48.521676673Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1794 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:48Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:48.522037 env[1112]: time="2023-10-02T19:32:48.521973341Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:32:48.522291 env[1112]: time="2023-10-02T19:32:48.522221366Z" level=error msg="Failed to pipe stderr of container \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\"" error="reading from a closed fifo" Oct 2 19:32:48.524451 env[1112]: time="2023-10-02T19:32:48.524406450Z" level=error msg="Failed to pipe stdout of container \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\"" error="reading from a closed fifo" Oct 2 19:32:48.526902 env[1112]: time="2023-10-02T19:32:48.526862091Z" level=error msg="StartContainer for \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:48.527072 kubelet[1417]: E1002 19:32:48.527055 1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a" Oct 2 19:32:48.527162 kubelet[1417]: E1002 19:32:48.527157 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:48.527162 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:48.527162 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:32:48.527162 kubelet[1417]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8f65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:48.527328 kubelet[1417]: E1002 19:32:48.527188 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:32:48.922235 kubelet[1417]: E1002 19:32:48.922121 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:49.468306 kubelet[1417]: I1002 19:32:49.468272 1417 scope.go:115] "RemoveContainer" containerID="9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8" Oct 2 19:32:49.468582 kubelet[1417]: I1002 19:32:49.468562 1417 scope.go:115] "RemoveContainer" containerID="9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8" Oct 2 19:32:49.469521 env[1112]: time="2023-10-02T19:32:49.469422217Z" level=info msg="RemoveContainer for \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\"" Oct 2 19:32:49.469832 env[1112]: time="2023-10-02T19:32:49.469625920Z" level=info msg="RemoveContainer for \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\"" Oct 2 19:32:49.469832 env[1112]: time="2023-10-02T19:32:49.469737109Z" level=error msg="RemoveContainer for \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\" failed" error="failed to set removing state for container \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\": container is already in removing state" Oct 2 19:32:49.469930 kubelet[1417]: E1002 19:32:49.469902 1417 remote_runtime.go:368] "RemoveContainer from runtime service 
failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\": container is already in removing state" containerID="9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8" Oct 2 19:32:49.469930 kubelet[1417]: E1002 19:32:49.469944 1417 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8": container is already in removing state; Skipping pod "cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)" Oct 2 19:32:49.470124 kubelet[1417]: E1002 19:32:49.470000 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:49.470203 kubelet[1417]: E1002 19:32:49.470181 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:32:49.472272 env[1112]: time="2023-10-02T19:32:49.472248655Z" level=info msg="RemoveContainer for \"9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8\" returns successfully" Oct 2 19:32:49.476778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a-rootfs.mount: Deactivated successfully. Oct 2 19:32:49.922671 kubelet[1417]: E1002 19:32:49.922550 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:50.636838 kubelet[1417]: W1002 19:32:50.636714 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe32c401_8569_4283_914d_545fbac827dd.slice/cri-containerd-9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8.scope WatchSource:0}: container "9f20d114d98a001ba20bc0b88f5b6915a8bcea7ff2ef5f95e0a9192e130be7c8" in namespace "k8s.io": not found Oct 2 19:32:50.923804 kubelet[1417]: E1002 19:32:50.923774 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:51.924220 kubelet[1417]: E1002 19:32:51.924166 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:52.925011 kubelet[1417]: E1002 19:32:52.924971 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:53.744219 kubelet[1417]: W1002 19:32:53.744168 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe32c401_8569_4283_914d_545fbac827dd.slice/cri-containerd-a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a.scope WatchSource:0}: task a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a not found: not found Oct 2 19:32:53.925336 kubelet[1417]: E1002 19:32:53.925267 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:54.926418 kubelet[1417]: E1002 19:32:54.926377 1417 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:55.926671 kubelet[1417]: E1002 19:32:55.926625 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:56.927007 kubelet[1417]: E1002 19:32:56.926959 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:57.927765 kubelet[1417]: E1002 19:32:57.927723 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:58.928260 kubelet[1417]: E1002 19:32:58.928208 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:59.929207 kubelet[1417]: E1002 19:32:59.929173 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:00.930180 kubelet[1417]: E1002 19:33:00.930145 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:01.930262 kubelet[1417]: E1002 19:33:01.930215 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:02.930407 kubelet[1417]: E1002 19:33:02.930365 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:03.931454 kubelet[1417]: E1002 19:33:03.931375 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:04.333038 kubelet[1417]: E1002 19:33:04.332756 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:04.334685 env[1112]: time="2023-10-02T19:33:04.334634788Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:33:04.345252 env[1112]: time="2023-10-02T19:33:04.345210840Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\"" Oct 2 19:33:04.345591 env[1112]: time="2023-10-02T19:33:04.345563771Z" level=info msg="StartContainer for \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\"" Oct 2 19:33:04.358660 systemd[1]: run-containerd-runc-k8s.io-8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b-runc.nxP4lj.mount: Deactivated successfully. Oct 2 19:33:04.359906 systemd[1]: Started cri-containerd-8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b.scope. Oct 2 19:33:04.369878 systemd[1]: cri-containerd-8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b.scope: Deactivated successfully. Oct 2 19:33:04.370172 systemd[1]: Stopped cri-containerd-8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b.scope. 
Oct 2 19:33:04.378071 env[1112]: time="2023-10-02T19:33:04.378013591Z" level=info msg="shim disconnected" id=8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b Oct 2 19:33:04.378214 env[1112]: time="2023-10-02T19:33:04.378085470Z" level=warning msg="cleaning up after shim disconnected" id=8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b namespace=k8s.io Oct 2 19:33:04.378214 env[1112]: time="2023-10-02T19:33:04.378095639Z" level=info msg="cleaning up dead shim" Oct 2 19:33:04.384229 env[1112]: time="2023-10-02T19:33:04.384165865Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1829 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:33:04.384518 env[1112]: time="2023-10-02T19:33:04.384431668Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:33:04.385634 env[1112]: time="2023-10-02T19:33:04.385581086Z" level=error msg="Failed to pipe stdout of container \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\"" error="reading from a closed fifo" Oct 2 19:33:04.385697 env[1112]: time="2023-10-02T19:33:04.385668235Z" level=error msg="Failed to pipe stderr of container \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\"" error="reading from a closed fifo" Oct 2 19:33:04.387798 env[1112]: time="2023-10-02T19:33:04.387762717Z" level=error msg="StartContainer for \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:33:04.388017 kubelet[1417]: E1002 19:33:04.387991 1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b" Oct 2 19:33:04.388093 kubelet[1417]: E1002 19:33:04.388088 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:33:04.388093 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:33:04.388093 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:33:04.388093 kubelet[1417]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8f65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:33:04.388240 kubelet[1417]: E1002 19:33:04.388121 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:33:04.494394 kubelet[1417]: I1002 19:33:04.494355 1417 scope.go:115] "RemoveContainer" containerID="a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a" Oct 2 19:33:04.494689 kubelet[1417]: I1002 19:33:04.494669 1417 scope.go:115] "RemoveContainer" containerID="a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a" Oct 2 19:33:04.495714 env[1112]: time="2023-10-02T19:33:04.495646467Z" level=info msg="RemoveContainer for \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\"" Oct 2 19:33:04.496040 env[1112]: time="2023-10-02T19:33:04.496012504Z" level=info msg="RemoveContainer for \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\"" Oct 2 19:33:04.496483 env[1112]: time="2023-10-02T19:33:04.496432214Z" level=error msg="RemoveContainer for \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\" failed" error="failed to set removing state for container \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\": container is already in removing state" Oct 2 19:33:04.496678 kubelet[1417]: E1002 19:33:04.496617 1417 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\": container is already in 
removing state" containerID="a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a" Oct 2 19:33:04.496678 kubelet[1417]: E1002 19:33:04.496643 1417 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a": container is already in removing state; Skipping pod "cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)" Oct 2 19:33:04.496762 kubelet[1417]: E1002 19:33:04.496696 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:04.496887 kubelet[1417]: E1002 19:33:04.496867 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:33:04.499422 env[1112]: time="2023-10-02T19:33:04.499382789Z" level=info msg="RemoveContainer for \"a23cdde7b82f7b64d654b473d099ba198b6c6a21bc00d3ad0a25ccbcf548301a\" returns successfully" Oct 2 19:33:04.932061 kubelet[1417]: E1002 19:33:04.932000 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:05.341341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b-rootfs.mount: Deactivated successfully. Oct 2 19:33:05.932287 kubelet[1417]: E1002 19:33:05.932235 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:06.932879 kubelet[1417]: E1002 19:33:06.932825 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:07.482775 kubelet[1417]: W1002 19:33:07.482740 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe32c401_8569_4283_914d_545fbac827dd.slice/cri-containerd-8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b.scope WatchSource:0}: task 8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b not found: not found Oct 2 19:33:07.866634 kubelet[1417]: E1002 19:33:07.866462 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:07.932953 kubelet[1417]: E1002 19:33:07.932908 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:08.933635 kubelet[1417]: E1002 19:33:08.933590 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:09.934132 kubelet[1417]: E1002 19:33:09.934092 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:10.934944 kubelet[1417]: E1002 19:33:10.934900 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:11.935690 kubelet[1417]: E1002 19:33:11.935591 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:33:12.935872 kubelet[1417]: E1002 19:33:12.935809 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:13.936134 kubelet[1417]: E1002 19:33:13.936077 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:14.332880 kubelet[1417]: E1002 19:33:14.332752 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:14.936554 kubelet[1417]: E1002 19:33:14.936524 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:15.937155 kubelet[1417]: E1002 19:33:15.937110 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:16.332726 kubelet[1417]: E1002 19:33:16.332410 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:16.332726 kubelet[1417]: E1002 19:33:16.332669 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:33:16.937386 kubelet[1417]: E1002 19:33:16.937318 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:17.938269 kubelet[1417]: E1002 19:33:17.938232 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:18.938564 kubelet[1417]: E1002 19:33:18.938518 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:19.939342 kubelet[1417]: E1002 19:33:19.939277 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:20.940193 kubelet[1417]: E1002 19:33:20.940163 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:21.940670 kubelet[1417]: E1002 19:33:21.940612 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:22.941207 kubelet[1417]: E1002 19:33:22.941180 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:23.942054 kubelet[1417]: E1002 19:33:23.942002 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:24.942627 kubelet[1417]: E1002 19:33:24.942592 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:25.943175 kubelet[1417]: E1002 19:33:25.943145 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:26.944077 kubelet[1417]: E1002 19:33:26.944034 1417 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:27.332721 kubelet[1417]: E1002 19:33:27.332588 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:27.334097 env[1112]: time="2023-10-02T19:33:27.334060559Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:33:27.344238 env[1112]: time="2023-10-02T19:33:27.344203274Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\"" Oct 2 19:33:27.344563 env[1112]: time="2023-10-02T19:33:27.344528404Z" level=info msg="StartContainer for \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\"" Oct 2 19:33:27.358319 systemd[1]: Started cri-containerd-9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f.scope. Oct 2 19:33:27.364432 systemd[1]: cri-containerd-9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f.scope: Deactivated successfully. Oct 2 19:33:27.364647 systemd[1]: Stopped cri-containerd-9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f.scope. Oct 2 19:33:27.372106 env[1112]: time="2023-10-02T19:33:27.372045587Z" level=info msg="shim disconnected" id=9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f Oct 2 19:33:27.372242 env[1112]: time="2023-10-02T19:33:27.372115762Z" level=warning msg="cleaning up after shim disconnected" id=9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f namespace=k8s.io Oct 2 19:33:27.372242 env[1112]: time="2023-10-02T19:33:27.372125600Z" level=info msg="cleaning up dead shim" Oct 2 19:33:27.378149 env[1112]: time="2023-10-02T19:33:27.378090571Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1868 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:33:27.378379 env[1112]: time="2023-10-02T19:33:27.378324227Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:33:27.378578 env[1112]: time="2023-10-02T19:33:27.378536661Z" level=error msg="Failed to pipe stdout of container \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\"" error="reading from a closed fifo" Oct 2 19:33:27.378698 env[1112]: time="2023-10-02T19:33:27.378668743Z" level=error msg="Failed to pipe stderr of container \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\"" error="reading from a closed fifo" Oct 2 19:33:27.380858 env[1112]: time="2023-10-02T19:33:27.380830133Z" level=error msg="StartContainer for \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:33:27.381178 kubelet[1417]: E1002 19:33:27.381151 
1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f" Oct 2 19:33:27.381299 kubelet[1417]: E1002 19:33:27.381277 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:33:27.381299 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:33:27.381299 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:33:27.381299 kubelet[1417]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8f65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:33:27.381452 kubelet[1417]: E1002 19:33:27.381318 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:33:27.529872 kubelet[1417]: I1002 19:33:27.529837 1417 scope.go:115] "RemoveContainer" containerID="8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b" Oct 2 19:33:27.530207 kubelet[1417]: I1002 19:33:27.530183 1417 scope.go:115] "RemoveContainer" containerID="8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b" Oct 2 19:33:27.530821 env[1112]: 
time="2023-10-02T19:33:27.530776598Z" level=info msg="RemoveContainer for \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\"" Oct 2 19:33:27.531089 env[1112]: time="2023-10-02T19:33:27.531057835Z" level=info msg="RemoveContainer for \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\"" Oct 2 19:33:27.531170 env[1112]: time="2023-10-02T19:33:27.531136304Z" level=error msg="RemoveContainer for \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\" failed" error="failed to set removing state for container \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\": container is already in removing state" Oct 2 19:33:27.531270 kubelet[1417]: E1002 19:33:27.531256 1417 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\": container is already in removing state" containerID="8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b" Oct 2 19:33:27.531329 kubelet[1417]: E1002 19:33:27.531282 1417 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b": container is already in removing state; Skipping pod "cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)" Oct 2 19:33:27.531362 kubelet[1417]: E1002 19:33:27.531335 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:27.531571 kubelet[1417]: E1002 19:33:27.531560 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:33:27.608242 env[1112]: time="2023-10-02T19:33:27.608150452Z" level=info msg="RemoveContainer for \"8e0bcc96bb34ddc8385d9675c7b7d2d65f33f9b6fb442fd826a756e9aa9a456b\" returns successfully" Oct 2 19:33:27.866311 kubelet[1417]: E1002 19:33:27.866191 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:27.944527 kubelet[1417]: E1002 19:33:27.944468 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:27.960731 kubelet[1417]: E1002 19:33:27.960706 1417 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:33:28.336487 kubelet[1417]: E1002 19:33:28.336447 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:28.340324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f-rootfs.mount: Deactivated successfully. 
Oct 2 19:33:28.944782 kubelet[1417]: E1002 19:33:28.944727 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:29.945751 kubelet[1417]: E1002 19:33:29.945666 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:30.476840 kubelet[1417]: W1002 19:33:30.476771 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe32c401_8569_4283_914d_545fbac827dd.slice/cri-containerd-9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f.scope WatchSource:0}: task 9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f not found: not found Oct 2 19:33:30.946100 kubelet[1417]: E1002 19:33:30.946042 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:31.946236 kubelet[1417]: E1002 19:33:31.946164 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:32.946406 kubelet[1417]: E1002 19:33:32.946352 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:33.337919 kubelet[1417]: E1002 19:33:33.337817 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:33.946856 kubelet[1417]: E1002 19:33:33.946799 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:34.947458 kubelet[1417]: E1002 19:33:34.947384 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:35.947990 kubelet[1417]: E1002 19:33:35.947929 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:36.948828 kubelet[1417]: E1002 19:33:36.948768 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:37.949611 kubelet[1417]: E1002 19:33:37.949548 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:38.339180 kubelet[1417]: E1002 19:33:38.339073 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:38.949719 kubelet[1417]: E1002 19:33:38.949649 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:39.332572 kubelet[1417]: E1002 19:33:39.332399 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:39.332752 kubelet[1417]: E1002 19:33:39.332616 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:33:39.950688 
kubelet[1417]: E1002 19:33:39.950613 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:40.950803 kubelet[1417]: E1002 19:33:40.950749 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:41.951681 kubelet[1417]: E1002 19:33:41.951582 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:42.952683 kubelet[1417]: E1002 19:33:42.952581 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:43.339957 kubelet[1417]: E1002 19:33:43.339841 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:43.953480 kubelet[1417]: E1002 19:33:43.953430 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:44.953793 kubelet[1417]: E1002 19:33:44.953730 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:45.954713 kubelet[1417]: E1002 19:33:45.954646 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:46.955647 kubelet[1417]: E1002 19:33:46.955580 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:47.865690 kubelet[1417]: E1002 19:33:47.865622 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:47.956187 kubelet[1417]: E1002 19:33:47.956142 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:48.340929 kubelet[1417]: E1002 19:33:48.340888 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:48.956815 kubelet[1417]: E1002 19:33:48.956755 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:49.957194 kubelet[1417]: E1002 19:33:49.957139 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:50.957722 kubelet[1417]: E1002 19:33:50.957662 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:51.332621 kubelet[1417]: E1002 19:33:51.332483 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:51.332795 kubelet[1417]: E1002 19:33:51.332684 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:33:51.958220 kubelet[1417]: E1002 19:33:51.958175 1417 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:52.959320 kubelet[1417]: E1002 19:33:52.959271 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:53.342590 kubelet[1417]: E1002 19:33:53.342464 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:53.959748 kubelet[1417]: E1002 19:33:53.959698 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:54.960000 kubelet[1417]: E1002 19:33:54.959939 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:55.960303 kubelet[1417]: E1002 19:33:55.960247 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:56.960755 kubelet[1417]: E1002 19:33:56.960694 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:57.961013 kubelet[1417]: E1002 19:33:57.960961 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:58.343447 kubelet[1417]: E1002 19:33:58.343333 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:58.961260 kubelet[1417]: E1002 19:33:58.961202 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:59.962148 kubelet[1417]: E1002 19:33:59.962086 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:00.963002 kubelet[1417]: E1002 19:34:00.962942 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:01.963695 kubelet[1417]: E1002 19:34:01.963659 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:02.964831 kubelet[1417]: E1002 19:34:02.964769 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:03.332942 kubelet[1417]: E1002 19:34:03.332816 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:03.333092 kubelet[1417]: E1002 19:34:03.333018 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:34:03.344337 kubelet[1417]: E1002 19:34:03.344314 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:03.965611 kubelet[1417]: E1002 19:34:03.965556 1417 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:04.966718 kubelet[1417]: E1002 19:34:04.966665 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:05.967785 kubelet[1417]: E1002 19:34:05.967729 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:06.968438 kubelet[1417]: E1002 19:34:06.968383 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:07.865829 kubelet[1417]: E1002 19:34:07.865766 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:07.969330 kubelet[1417]: E1002 19:34:07.969306 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:08.344931 kubelet[1417]: E1002 19:34:08.344895 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:08.969911 kubelet[1417]: E1002 19:34:08.969836 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:09.970377 kubelet[1417]: E1002 19:34:09.970303 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:10.970736 kubelet[1417]: E1002 19:34:10.970673 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:11.970861 kubelet[1417]: E1002 19:34:11.970779 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:12.971276 kubelet[1417]: E1002 19:34:12.971219 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:13.345861 kubelet[1417]: E1002 19:34:13.345747 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:13.972035 kubelet[1417]: E1002 19:34:13.971977 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:14.972743 kubelet[1417]: E1002 19:34:14.972696 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:15.973105 kubelet[1417]: E1002 19:34:15.973046 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:16.973253 kubelet[1417]: E1002 19:34:16.973174 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:17.332552 kubelet[1417]: E1002 19:34:17.332407 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:17.334364 env[1112]: time="2023-10-02T19:34:17.334323504Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 
19:34:17.344494 env[1112]: time="2023-10-02T19:34:17.344459407Z" level=info msg="CreateContainer within sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65\"" Oct 2 19:34:17.344812 env[1112]: time="2023-10-02T19:34:17.344784521Z" level=info msg="StartContainer for \"1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65\"" Oct 2 19:34:17.359467 systemd[1]: Started cri-containerd-1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65.scope. Oct 2 19:34:17.366805 systemd[1]: cri-containerd-1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65.scope: Deactivated successfully. Oct 2 19:34:17.367092 systemd[1]: Stopped cri-containerd-1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65.scope. Oct 2 19:34:17.369886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65-rootfs.mount: Deactivated successfully. Oct 2 19:34:17.374267 env[1112]: time="2023-10-02T19:34:17.374223271Z" level=info msg="shim disconnected" id=1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65 Oct 2 19:34:17.374267 env[1112]: time="2023-10-02T19:34:17.374266713Z" level=warning msg="cleaning up after shim disconnected" id=1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65 namespace=k8s.io Oct 2 19:34:17.374431 env[1112]: time="2023-10-02T19:34:17.374275319Z" level=info msg="cleaning up dead shim" Oct 2 19:34:17.380389 env[1112]: time="2023-10-02T19:34:17.380339773Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1911 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:17.380671 env[1112]: time="2023-10-02T19:34:17.380617367Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:34:17.380824 env[1112]: time="2023-10-02T19:34:17.380789182Z" level=error msg="Failed to pipe stdout of container \"1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65\"" error="reading from a closed fifo" Oct 2 19:34:17.380889 env[1112]: time="2023-10-02T19:34:17.380834888Z" level=error msg="Failed to pipe stderr of container \"1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65\"" error="reading from a closed fifo" Oct 2 19:34:17.383082 env[1112]: time="2023-10-02T19:34:17.383046465Z" level=error msg="StartContainer for \"1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:17.383319 kubelet[1417]: E1002 19:34:17.383289 1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65" Oct 2 19:34:17.383415 kubelet[1417]: E1002 19:34:17.383401 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:17.383415 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:17.383415 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:34:17.383415 kubelet[1417]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8f65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:17.383555 kubelet[1417]: E1002 19:34:17.383439 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:34:17.605324 kubelet[1417]: I1002 19:34:17.605203 1417 scope.go:115] "RemoveContainer" containerID="9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f" Oct 2 19:34:17.605540 kubelet[1417]: I1002 19:34:17.605495 1417 scope.go:115] "RemoveContainer" containerID="9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f" Oct 2 19:34:17.606991 env[1112]: time="2023-10-02T19:34:17.606913999Z" level=info msg="RemoveContainer for \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\"" Oct 2 19:34:17.607336 env[1112]: time="2023-10-02T19:34:17.607308063Z" level=info msg="RemoveContainer for \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\"" Oct 2 19:34:17.607427 
env[1112]: time="2023-10-02T19:34:17.607391170Z" level=error msg="RemoveContainer for \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\" failed" error="failed to set removing state for container \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\": container is already in removing state" Oct 2 19:34:17.607655 kubelet[1417]: E1002 19:34:17.607624 1417 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\": container is already in removing state" containerID="9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f" Oct 2 19:34:17.607655 kubelet[1417]: E1002 19:34:17.607661 1417 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f": container is already in removing state; Skipping pod "cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)" Oct 2 19:34:17.607853 kubelet[1417]: E1002 19:34:17.607733 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:17.608003 kubelet[1417]: E1002 19:34:17.607975 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:34:17.609592 env[1112]: time="2023-10-02T19:34:17.609562140Z" level=info msg="RemoveContainer for \"9d3cb83956e9800abc37e1d633769564ecb3e055d499f01f5c5bb9d88f4d824f\" returns successfully" Oct 2 19:34:17.973644 kubelet[1417]: E1002 19:34:17.973585 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:18.346481 kubelet[1417]: E1002 19:34:18.346366 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:18.974732 kubelet[1417]: E1002 19:34:18.974649 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:19.975714 kubelet[1417]: E1002 19:34:19.975612 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:20.479678 kubelet[1417]: W1002 19:34:20.479599 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe32c401_8569_4283_914d_545fbac827dd.slice/cri-containerd-1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65.scope WatchSource:0}: task 1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65 not found: not found Oct 2 19:34:20.976349 kubelet[1417]: E1002 19:34:20.976284 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:21.977094 kubelet[1417]: E1002 19:34:21.977022 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:34:22.977594 kubelet[1417]: E1002 19:34:22.977548 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:23.347606 kubelet[1417]: E1002 19:34:23.347528 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:23.978607 kubelet[1417]: E1002 19:34:23.978574 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:24.979067 kubelet[1417]: E1002 19:34:24.979015 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:25.980117 kubelet[1417]: E1002 19:34:25.980064 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:26.980361 kubelet[1417]: E1002 19:34:26.980313 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:27.866427 kubelet[1417]: E1002 19:34:27.866391 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:27.980741 kubelet[1417]: E1002 19:34:27.980707 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:28.348740 kubelet[1417]: E1002 19:34:28.348717 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:28.981486 kubelet[1417]: E1002 19:34:28.981431 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:29.982245 kubelet[1417]: E1002 19:34:29.982200 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:30.332457 kubelet[1417]: E1002 19:34:30.332200 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:30.332629 kubelet[1417]: E1002 19:34:30.332479 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-q5m6r_kube-system(fe32c401-8569-4283-914d-545fbac827dd)\"" pod="kube-system/cilium-q5m6r" podUID=fe32c401-8569-4283-914d-545fbac827dd Oct 2 19:34:30.982723 kubelet[1417]: E1002 19:34:30.982672 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:31.983601 kubelet[1417]: E1002 19:34:31.983537 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:32.984106 kubelet[1417]: E1002 19:34:32.984062 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:33.350328 kubelet[1417]: E1002 19:34:33.350212 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:33.985016 
kubelet[1417]: E1002 19:34:33.984962 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:34.985236 kubelet[1417]: E1002 19:34:34.985167 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:35.226075 env[1112]: time="2023-10-02T19:34:35.225991049Z" level=info msg="StopPodSandbox for \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\"" Oct 2 19:34:35.226612 env[1112]: time="2023-10-02T19:34:35.226108851Z" level=info msg="Container to stop \"1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:34:35.227687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126-shm.mount: Deactivated successfully. Oct 2 19:34:35.231691 systemd[1]: cri-containerd-1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126.scope: Deactivated successfully. Oct 2 19:34:35.230000 audit: BPF prog-id=64 op=UNLOAD Oct 2 19:34:35.232537 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:34:35.232618 kernel: audit: type=1334 audit(1696275275.230:644): prog-id=64 op=UNLOAD Oct 2 19:34:35.235000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:34:35.238528 kernel: audit: type=1334 audit(1696275275.235:645): prog-id=68 op=UNLOAD Oct 2 19:34:35.247465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126-rootfs.mount: Deactivated successfully. Oct 2 19:34:35.251302 env[1112]: time="2023-10-02T19:34:35.251232712Z" level=info msg="shim disconnected" id=1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126 Oct 2 19:34:35.251399 env[1112]: time="2023-10-02T19:34:35.251304717Z" level=warning msg="cleaning up after shim disconnected" id=1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126 namespace=k8s.io Oct 2 19:34:35.251399 env[1112]: time="2023-10-02T19:34:35.251328753Z" level=info msg="cleaning up dead shim" Oct 2 19:34:35.258201 env[1112]: time="2023-10-02T19:34:35.258142067Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1943 runtime=io.containerd.runc.v2\n" Oct 2 19:34:35.258565 env[1112]: time="2023-10-02T19:34:35.258533907Z" level=info msg="TearDown network for sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" successfully" Oct 2 19:34:35.258622 env[1112]: time="2023-10-02T19:34:35.258567139Z" level=info msg="StopPodSandbox for \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" returns successfully" Oct 2 19:34:35.337833 kubelet[1417]: I1002 19:34:35.337766 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe32c401-8569-4283-914d-545fbac827dd-cilium-config-path\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.337833 kubelet[1417]: I1002 19:34:35.337815 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-bpf-maps\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.337833 kubelet[1417]: I1002 19:34:35.337839 1417 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe32c401-8569-4283-914d-545fbac827dd-clustermesh-secrets\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337863 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe32c401-8569-4283-914d-545fbac827dd-hubble-tls\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337884 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cilium-cgroup\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337904 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-host-proc-sys-net\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337925 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-xtables-lock\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337922 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337954 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-hostproc\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337975 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cni-path\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337984 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.337996 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cilium-run\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.338022 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8f65\" (UniqueName: \"kubernetes.io/projected/fe32c401-8569-4283-914d-545fbac827dd-kube-api-access-l8f65\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.338048 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.338056 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-host-proc-sys-kernel\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.338091 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-lib-modules\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.338084 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338170 kubelet[1417]: I1002 19:34:35.338108 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-etc-cni-netd\") pod \"fe32c401-8569-4283-914d-545fbac827dd\" (UID: \"fe32c401-8569-4283-914d-545fbac827dd\") " Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338119 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338129 1417 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cilium-cgroup\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338139 1417 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-host-proc-sys-net\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338141 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-hostproc" (OuterVolumeSpecName: "hostproc") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338147 1417 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-host-proc-sys-kernel\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338156 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338167 1417 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-bpf-maps\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338168 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338182 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338784 kubelet[1417]: I1002 19:34:35.338193 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cni-path" (OuterVolumeSpecName: "cni-path") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:35.338784 kubelet[1417]: W1002 19:34:35.338269 1417 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/fe32c401-8569-4283-914d-545fbac827dd/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:34:35.340201 kubelet[1417]: I1002 19:34:35.340173 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe32c401-8569-4283-914d-545fbac827dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:34:35.341109 kubelet[1417]: I1002 19:34:35.341074 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe32c401-8569-4283-914d-545fbac827dd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:34:35.341213 kubelet[1417]: I1002 19:34:35.341178 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe32c401-8569-4283-914d-545fbac827dd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:34:35.341816 kubelet[1417]: I1002 19:34:35.341790 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe32c401-8569-4283-914d-545fbac827dd-kube-api-access-l8f65" (OuterVolumeSpecName: "kube-api-access-l8f65") pod "fe32c401-8569-4283-914d-545fbac827dd" (UID: "fe32c401-8569-4283-914d-545fbac827dd"). InnerVolumeSpecName "kube-api-access-l8f65". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:34:35.342348 systemd[1]: var-lib-kubelet-pods-fe32c401\x2d8569\x2d4283\x2d914d\x2d545fbac827dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl8f65.mount: Deactivated successfully. Oct 2 19:34:35.342463 systemd[1]: var-lib-kubelet-pods-fe32c401\x2d8569\x2d4283\x2d914d\x2d545fbac827dd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:34:35.342559 systemd[1]: var-lib-kubelet-pods-fe32c401\x2d8569\x2d4283\x2d914d\x2d545fbac827dd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:34:35.438816 kubelet[1417]: I1002 19:34:35.438758 1417 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-etc-cni-netd\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.438816 kubelet[1417]: I1002 19:34:35.438791 1417 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cni-path\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.438816 kubelet[1417]: I1002 19:34:35.438800 1417 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-cilium-run\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.438816 kubelet[1417]: I1002 19:34:35.438810 1417 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l8f65\" (UniqueName: \"kubernetes.io/projected/fe32c401-8569-4283-914d-545fbac827dd-kube-api-access-l8f65\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.438816 kubelet[1417]: I1002 19:34:35.438819 1417 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-lib-modules\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.438816 kubelet[1417]: I1002 19:34:35.438828 1417 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe32c401-8569-4283-914d-545fbac827dd-cilium-config-path\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.438816 kubelet[1417]: I1002 19:34:35.438836 1417 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe32c401-8569-4283-914d-545fbac827dd-clustermesh-secrets\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.438816 kubelet[1417]: I1002 19:34:35.438847 1417 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe32c401-8569-4283-914d-545fbac827dd-hubble-tls\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.439194 kubelet[1417]: I1002 19:34:35.438856 1417 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-hostproc\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.439194 kubelet[1417]: I1002 19:34:35.438864 1417 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe32c401-8569-4283-914d-545fbac827dd-xtables-lock\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:34:35.634288 kubelet[1417]: I1002 19:34:35.634152 1417 scope.go:115] "RemoveContainer" containerID="1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65" Oct 2 19:34:35.636409 env[1112]: time="2023-10-02T19:34:35.636357814Z" level=info msg="RemoveContainer for \"1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65\"" Oct 2 19:34:35.639025 systemd[1]: Removed slice kubepods-burstable-podfe32c401_8569_4283_914d_545fbac827dd.slice. 
Oct 2 19:34:35.639183 env[1112]: time="2023-10-02T19:34:35.639123382Z" level=info msg="RemoveContainer for \"1b9730bc1aade70f8577fcaf7af0766c2bbea967ef74846ef9199a5bb20a9c65\" returns successfully" Oct 2 19:34:35.985722 kubelet[1417]: E1002 19:34:35.985663 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:36.334450 kubelet[1417]: I1002 19:34:36.334302 1417 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=fe32c401-8569-4283-914d-545fbac827dd path="/var/lib/kubelet/pods/fe32c401-8569-4283-914d-545fbac827dd/volumes" Oct 2 19:34:36.986313 kubelet[1417]: E1002 19:34:36.986223 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:37.783081 kubelet[1417]: I1002 19:34:37.783025 1417 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:34:37.783081 kubelet[1417]: E1002 19:34:37.783088 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.783081 kubelet[1417]: E1002 19:34:37.783096 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.783330 kubelet[1417]: E1002 19:34:37.783102 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.783330 kubelet[1417]: I1002 19:34:37.783121 1417 memory_manager.go:346] "RemoveStaleState removing state" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.783330 kubelet[1417]: I1002 19:34:37.783126 1417 memory_manager.go:346] "RemoveStaleState removing state" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.783330 kubelet[1417]: I1002 19:34:37.783131 1417 memory_manager.go:346] "RemoveStaleState removing state" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.783330 kubelet[1417]: I1002 19:34:37.783136 1417 memory_manager.go:346] "RemoveStaleState removing state" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.787267 systemd[1]: Created slice kubepods-besteffort-podf76574a0_7e88_4153_b749_73771b95d06c.slice. Oct 2 19:34:37.808298 kubelet[1417]: I1002 19:34:37.808277 1417 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:34:37.808373 kubelet[1417]: E1002 19:34:37.808308 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.808373 kubelet[1417]: I1002 19:34:37.808323 1417 memory_manager.go:346] "RemoveStaleState removing state" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.808373 kubelet[1417]: E1002 19:34:37.808334 1417 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe32c401-8569-4283-914d-545fbac827dd" containerName="mount-cgroup" Oct 2 19:34:37.811812 systemd[1]: Created slice kubepods-burstable-poda87e0494_243f_410e_874d_9f2966aa9c2b.slice. 
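Note: the slice names removed and created here follow the kubelet's systemd cgroup-driver convention: the QoS class (burstable/besteffort) plus the pod UID with dashes replaced by underscores. A rough sketch of inspecting the new cilium pod's slice on the node; the /sys/fs/cgroup path is an assumption based on a cgroup v2 unified hierarchy and may differ on this host:

    # Pod UID a87e0494-243f-410e-874d-9f2966aa9c2b appears with '-' -> '_' in the slice name.
    systemctl status kubepods-burstable-poda87e0494_243f_410e_874d_9f2966aa9c2b.slice
    # Hypothetical cgroup directory under the unified hierarchy:
    ls /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda87e0494_243f_410e_874d_9f2966aa9c2b.slice/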
Oct 2 19:34:37.849576 kubelet[1417]: I1002 19:34:37.849542 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-xtables-lock\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849728 kubelet[1417]: I1002 19:34:37.849594 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmm7\" (UniqueName: \"kubernetes.io/projected/a87e0494-243f-410e-874d-9f2966aa9c2b-kube-api-access-psmm7\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849728 kubelet[1417]: I1002 19:34:37.849625 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-cgroup\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849728 kubelet[1417]: I1002 19:34:37.849668 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-config-path\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849728 kubelet[1417]: I1002 19:34:37.849706 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a87e0494-243f-410e-874d-9f2966aa9c2b-hubble-tls\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849892 kubelet[1417]: I1002 19:34:37.849739 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-hostproc\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849892 kubelet[1417]: I1002 19:34:37.849783 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a87e0494-243f-410e-874d-9f2966aa9c2b-clustermesh-secrets\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849892 kubelet[1417]: I1002 19:34:37.849809 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-ipsec-secrets\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849892 kubelet[1417]: I1002 19:34:37.849846 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cni-path\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849892 kubelet[1417]: I1002 19:34:37.849864 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-etc-cni-netd\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.849892 kubelet[1417]: I1002 19:34:37.849879 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-lib-modules\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.850110 kubelet[1417]: I1002 19:34:37.849913 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f76574a0-7e88-4153-b749-73771b95d06c-cilium-config-path\") pod \"cilium-operator-574c4bb98d-wdgvt\" (UID: \"f76574a0-7e88-4153-b749-73771b95d06c\") " pod="kube-system/cilium-operator-574c4bb98d-wdgvt" Oct 2 19:34:37.850110 kubelet[1417]: I1002 19:34:37.849940 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvcn5\" (UniqueName: \"kubernetes.io/projected/f76574a0-7e88-4153-b749-73771b95d06c-kube-api-access-bvcn5\") pod \"cilium-operator-574c4bb98d-wdgvt\" (UID: \"f76574a0-7e88-4153-b749-73771b95d06c\") " pod="kube-system/cilium-operator-574c4bb98d-wdgvt" Oct 2 19:34:37.850110 kubelet[1417]: I1002 19:34:37.849955 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-run\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.850110 kubelet[1417]: I1002 19:34:37.849970 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-bpf-maps\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.850110 kubelet[1417]: I1002 19:34:37.849991 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-host-proc-sys-net\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.850110 kubelet[1417]: I1002 19:34:37.850013 1417 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-host-proc-sys-kernel\") pod \"cilium-jq6dc\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " pod="kube-system/cilium-jq6dc" Oct 2 19:34:37.987019 kubelet[1417]: E1002 19:34:37.986987 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:38.089734 kubelet[1417]: E1002 19:34:38.089664 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:38.090211 env[1112]: time="2023-10-02T19:34:38.090168724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-wdgvt,Uid:f76574a0-7e88-4153-b749-73771b95d06c,Namespace:kube-system,Attempt:0,}" Oct 2 
19:34:38.102546 env[1112]: time="2023-10-02T19:34:38.101290846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:34:38.102546 env[1112]: time="2023-10-02T19:34:38.101325020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:34:38.102546 env[1112]: time="2023-10-02T19:34:38.101338646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:34:38.102546 env[1112]: time="2023-10-02T19:34:38.101495301Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565 pid=1970 runtime=io.containerd.runc.v2 Oct 2 19:34:38.113779 systemd[1]: Started cri-containerd-b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565.scope. Oct 2 19:34:38.122920 kubelet[1417]: E1002 19:34:38.122462 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:38.123085 env[1112]: time="2023-10-02T19:34:38.122923925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jq6dc,Uid:a87e0494-243f-410e-874d-9f2966aa9c2b,Namespace:kube-system,Attempt:0,}" Oct 2 19:34:38.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.129802 kernel: audit: type=1400 audit(1696275278.124:646): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.129848 kernel: audit: type=1400 audit(1696275278.124:647): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.129866 kernel: audit: type=1400 audit(1696275278.124:648): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.135539 kernel: audit: type=1400 audit(1696275278.124:649): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.135606 kernel: audit: type=1400 audit(1696275278.124:650): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.124000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.138823 kernel: audit: type=1400 audit(1696275278.124:651): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.139623 kernel: audit: type=1400 audit(1696275278.124:652): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.143316 kernel: audit: type=1400 audit(1696275278.124:653): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit: BPF prog-id=75 op=LOAD Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=1970 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231633334636663313731346366396431633563336134623365313737 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=c items=0 ppid=1970 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.128000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231633334636663313731346366396431633563336134623365313737 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.128000 audit: BPF prog-id=76 op=LOAD Oct 2 19:34:38.128000 audit[1980]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c0001d2630 items=0 ppid=1970 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231633334636663313731346366396431633563336134623365313737 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { perfmon } 
for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.130000 audit: BPF prog-id=77 op=LOAD Oct 2 19:34:38.130000 audit[1980]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 a3=c0001d2678 items=0 ppid=1970 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231633334636663313731346366396431633563336134623365313737 Oct 2 19:34:38.133000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:34:38.133000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { perfmon } 
for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { perfmon } for pid=1980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit[1980]: AVC avc: denied { bpf } for pid=1980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.133000 audit: BPF prog-id=78 op=LOAD Oct 2 19:34:38.133000 audit[1980]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c0001d2a88 items=0 ppid=1970 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231633334636663313731346366396431633563336134623365313737 Oct 2 19:34:38.145486 env[1112]: time="2023-10-02T19:34:38.141483917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:34:38.145486 env[1112]: time="2023-10-02T19:34:38.143406745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:34:38.145486 env[1112]: time="2023-10-02T19:34:38.143429938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:34:38.147268 env[1112]: time="2023-10-02T19:34:38.146567968Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c pid=2003 runtime=io.containerd.runc.v2 Oct 2 19:34:38.155862 systemd[1]: Started cri-containerd-5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c.scope. 
Oct 2 19:34:38.164628 env[1112]: time="2023-10-02T19:34:38.164564106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-wdgvt,Uid:f76574a0-7e88-4153-b749-73771b95d06c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565\"" Oct 2 19:34:38.165180 kubelet[1417]: E1002 19:34:38.165160 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:38.166323 env[1112]: time="2023-10-02T19:34:38.166286245Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.164000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit: BPF prog-id=79 op=LOAD Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=2003 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.165000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653464653639353030303561373738633139303161376534613865 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=2003 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653464653639353030303561373738633139303161376534613865 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit: BPF prog-id=80 op=LOAD Oct 2 19:34:38.165000 audit[2013]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c000024460 items=0 ppid=2003 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653464653639353030303561373738633139303161376534613865 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit: BPF prog-id=81 op=LOAD Oct 2 19:34:38.165000 audit[2013]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c0000244a8 items=0 ppid=2003 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653464653639353030303561373738633139303161376534613865 Oct 2 19:34:38.165000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:34:38.165000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: 
AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { perfmon } for pid=2013 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit[2013]: AVC avc: denied { bpf } for pid=2013 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:38.165000 audit: BPF prog-id=82 op=LOAD Oct 2 19:34:38.165000 audit[2013]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c0000248b8 items=0 ppid=2003 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:38.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653464653639353030303561373738633139303161376534613865 Oct 2 19:34:38.179277 env[1112]: time="2023-10-02T19:34:38.179230033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jq6dc,Uid:a87e0494-243f-410e-874d-9f2966aa9c2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\"" Oct 2 19:34:38.179984 kubelet[1417]: E1002 19:34:38.179965 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:38.181649 env[1112]: time="2023-10-02T19:34:38.181613378Z" level=info msg="CreateContainer within sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:34:38.194749 env[1112]: time="2023-10-02T19:34:38.194702881Z" level=info msg="CreateContainer within sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\"" Oct 2 19:34:38.195100 env[1112]: time="2023-10-02T19:34:38.195057651Z" 
level=info msg="StartContainer for \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\"" Oct 2 19:34:38.207163 systemd[1]: Started cri-containerd-e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467.scope. Oct 2 19:34:38.217029 systemd[1]: cri-containerd-e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467.scope: Deactivated successfully. Oct 2 19:34:38.217272 systemd[1]: Stopped cri-containerd-e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467.scope. Oct 2 19:34:38.233965 env[1112]: time="2023-10-02T19:34:38.233916365Z" level=info msg="shim disconnected" id=e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467 Oct 2 19:34:38.233965 env[1112]: time="2023-10-02T19:34:38.233964054Z" level=warning msg="cleaning up after shim disconnected" id=e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467 namespace=k8s.io Oct 2 19:34:38.234163 env[1112]: time="2023-10-02T19:34:38.233972881Z" level=info msg="cleaning up dead shim" Oct 2 19:34:38.240748 env[1112]: time="2023-10-02T19:34:38.240721793Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2069 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:38.241159 env[1112]: time="2023-10-02T19:34:38.241022760Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:34:38.242625 env[1112]: time="2023-10-02T19:34:38.242581892Z" level=error msg="Failed to pipe stdout of container \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\"" error="reading from a closed fifo" Oct 2 19:34:38.243596 env[1112]: time="2023-10-02T19:34:38.243558143Z" level=error msg="Failed to pipe stderr of container \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\"" error="reading from a closed fifo" Oct 2 19:34:38.245951 env[1112]: time="2023-10-02T19:34:38.245886816Z" level=error msg="StartContainer for \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:38.246206 kubelet[1417]: E1002 19:34:38.246174 1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467" Oct 2 19:34:38.246307 kubelet[1417]: E1002 19:34:38.246289 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:38.246307 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:38.246307 kubelet[1417]: rm /hostbin/cilium-mount 
Oct 2 19:34:38.246307 kubelet[1417]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-psmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:38.246570 kubelet[1417]: E1002 19:34:38.246328 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:34:38.350826 kubelet[1417]: E1002 19:34:38.350710 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:38.641447 kubelet[1417]: E1002 19:34:38.641348 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:38.642913 env[1112]: time="2023-10-02T19:34:38.642876720Z" level=info msg="CreateContainer within sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:34:38.656739 env[1112]: time="2023-10-02T19:34:38.656689378Z" level=info msg="CreateContainer within sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\"" Oct 2 19:34:38.657098 env[1112]: time="2023-10-02T19:34:38.657066930Z" level=info msg="StartContainer for \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\"" Oct 2 19:34:38.670856 systemd[1]: Started 
cri-containerd-9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21.scope. Oct 2 19:34:38.679567 systemd[1]: cri-containerd-9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21.scope: Deactivated successfully. Oct 2 19:34:38.679854 systemd[1]: Stopped cri-containerd-9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21.scope. Oct 2 19:34:38.687249 env[1112]: time="2023-10-02T19:34:38.687176128Z" level=info msg="shim disconnected" id=9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21 Oct 2 19:34:38.687249 env[1112]: time="2023-10-02T19:34:38.687233156Z" level=warning msg="cleaning up after shim disconnected" id=9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21 namespace=k8s.io Oct 2 19:34:38.687249 env[1112]: time="2023-10-02T19:34:38.687242103Z" level=info msg="cleaning up dead shim" Oct 2 19:34:38.693233 env[1112]: time="2023-10-02T19:34:38.693200133Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2105 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:38.693463 env[1112]: time="2023-10-02T19:34:38.693418594Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Oct 2 19:34:38.693674 env[1112]: time="2023-10-02T19:34:38.693630284Z" level=error msg="Failed to pipe stderr of container \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\"" error="reading from a closed fifo" Oct 2 19:34:38.693674 env[1112]: time="2023-10-02T19:34:38.693604225Z" level=error msg="Failed to pipe stdout of container \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\"" error="reading from a closed fifo" Oct 2 19:34:38.695818 env[1112]: time="2023-10-02T19:34:38.695762406Z" level=error msg="StartContainer for \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:38.696031 kubelet[1417]: E1002 19:34:38.696007 1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21" Oct 2 19:34:38.696128 kubelet[1417]: E1002 19:34:38.696119 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:38.696128 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:38.696128 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:34:38.696128 kubelet[1417]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-psmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:38.696259 kubelet[1417]: E1002 19:34:38.696155 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:34:38.987891 kubelet[1417]: E1002 19:34:38.987843 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:39.324615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923112136.mount: Deactivated successfully. 
Oct 2 19:34:39.644291 kubelet[1417]: I1002 19:34:39.643970 1417 scope.go:115] "RemoveContainer" containerID="e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467" Oct 2 19:34:39.644433 kubelet[1417]: I1002 19:34:39.644406 1417 scope.go:115] "RemoveContainer" containerID="e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467" Oct 2 19:34:39.645012 env[1112]: time="2023-10-02T19:34:39.644959335Z" level=info msg="RemoveContainer for \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\"" Oct 2 19:34:39.645668 env[1112]: time="2023-10-02T19:34:39.645615593Z" level=info msg="RemoveContainer for \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\"" Oct 2 19:34:39.645822 env[1112]: time="2023-10-02T19:34:39.645772057Z" level=error msg="RemoveContainer for \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\" failed" error="failed to set removing state for container \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\": container is already in removing state" Oct 2 19:34:39.645936 kubelet[1417]: E1002 19:34:39.645923 1417 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\": container is already in removing state" containerID="e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467" Oct 2 19:34:39.645987 kubelet[1417]: E1002 19:34:39.645959 1417 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467": container is already in removing state; Skipping pod "cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b)" Oct 2 19:34:39.646023 kubelet[1417]: E1002 19:34:39.646015 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:39.646247 kubelet[1417]: E1002 19:34:39.646232 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b)\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:34:39.647645 env[1112]: time="2023-10-02T19:34:39.647620705Z" level=info msg="RemoveContainer for \"e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467\" returns successfully" Oct 2 19:34:39.988220 kubelet[1417]: E1002 19:34:39.988168 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:40.169447 env[1112]: time="2023-10-02T19:34:40.169377185Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:40.171672 env[1112]: time="2023-10-02T19:34:40.171609245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:40.173405 env[1112]: time="2023-10-02T19:34:40.173353516Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:40.173839 env[1112]: time="2023-10-02T19:34:40.173793976Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 19:34:40.175602 env[1112]: time="2023-10-02T19:34:40.175577220Z" level=info msg="CreateContainer within sandbox \"b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:34:40.188336 env[1112]: time="2023-10-02T19:34:40.188295861Z" level=info msg="CreateContainer within sandbox \"b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\"" Oct 2 19:34:40.188637 env[1112]: time="2023-10-02T19:34:40.188606607Z" level=info msg="StartContainer for \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\"" Oct 2 19:34:40.204443 systemd[1]: Started cri-containerd-57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a.scope. Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.210000 audit: BPF prog-id=83 op=LOAD Oct 2 
19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1970 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:40.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343736383636393730626138626335623936373761383534383733 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1970 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:40.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343736383636393730626138626335623936373761383534383733 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit: BPF prog-id=84 op=LOAD Oct 2 19:34:40.211000 audit[2124]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00032b040 items=0 ppid=1970 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:40.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343736383636393730626138626335623936373761383534383733 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit: BPF prog-id=85 op=LOAD Oct 2 19:34:40.211000 audit[2124]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00032b088 items=0 ppid=1970 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:40.211000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343736383636393730626138626335623936373761383534383733 Oct 2 19:34:40.211000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:34:40.211000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { perfmon } for pid=2124 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit[2124]: AVC avc: denied { bpf } for pid=2124 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:40.211000 audit: BPF prog-id=86 op=LOAD Oct 2 19:34:40.211000 audit[2124]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00032b498 items=0 ppid=1970 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:40.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537343736383636393730626138626335623936373761383534383733 Oct 2 19:34:40.223537 env[1112]: time="2023-10-02T19:34:40.223471312Z" level=info msg="StartContainer for \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\" returns successfully" Oct 2 19:34:40.236000 audit[2134]: AVC avc: denied { map_create } for pid=2134 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c483,c521 
tcontext=system_u:system_r:svirt_lxc_net_t:s0:c483,c521 tclass=bpf permissive=0 Oct 2 19:34:40.240224 kernel: kauditd_printk_skb: 163 callbacks suppressed Oct 2 19:34:40.240319 kernel: audit: type=1400 audit(1696275280.236:700): avc: denied { map_create } for pid=2134 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c483,c521 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c483,c521 tclass=bpf permissive=0 Oct 2 19:34:40.236000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0006517d0 a2=48 a3=c0006517c0 items=0 ppid=1970 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c483,c521 key=(null) Oct 2 19:34:40.244841 kernel: audit: type=1300 audit(1696275280.236:700): arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0006517d0 a2=48 a3=c0006517c0 items=0 ppid=1970 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c483,c521 key=(null) Oct 2 19:34:40.244904 kernel: audit: type=1327 audit(1696275280.236:700): proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:34:40.236000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:34:40.648535 kubelet[1417]: E1002 19:34:40.648399 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:40.989116 kubelet[1417]: E1002 19:34:40.989068 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:41.338575 kubelet[1417]: W1002 19:34:41.338355 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda87e0494_243f_410e_874d_9f2966aa9c2b.slice/cri-containerd-e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467.scope WatchSource:0}: container "e603a5ac5bacd8b46ef70c656ad5f31ef743024fbd9631b705d5779e402e9467" in namespace "k8s.io": not found Oct 2 19:34:41.650636 kubelet[1417]: E1002 19:34:41.650495 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:41.989401 kubelet[1417]: E1002 19:34:41.989332 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:42.990056 kubelet[1417]: E1002 19:34:42.989994 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:43.332422 kubelet[1417]: E1002 19:34:43.332293 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:43.351724 kubelet[1417]: E1002 19:34:43.351701 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" Oct 2 19:34:43.990435 kubelet[1417]: E1002 19:34:43.990386 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:44.445848 kubelet[1417]: W1002 19:34:44.445796 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda87e0494_243f_410e_874d_9f2966aa9c2b.slice/cri-containerd-9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21.scope WatchSource:0}: task 9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21 not found: not found Oct 2 19:34:44.991190 kubelet[1417]: E1002 19:34:44.991141 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:45.991767 kubelet[1417]: E1002 19:34:45.991699 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:46.991928 kubelet[1417]: E1002 19:34:46.991848 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:47.865766 kubelet[1417]: E1002 19:34:47.865696 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:47.992678 kubelet[1417]: E1002 19:34:47.992642 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:48.353119 kubelet[1417]: E1002 19:34:48.353088 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:48.993150 kubelet[1417]: E1002 19:34:48.993092 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:49.993771 kubelet[1417]: E1002 19:34:49.993702 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:50.994440 kubelet[1417]: E1002 19:34:50.994382 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:51.995003 kubelet[1417]: E1002 19:34:51.994945 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:52.995628 kubelet[1417]: E1002 19:34:52.995555 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:53.354205 kubelet[1417]: E1002 19:34:53.354098 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:53.996310 kubelet[1417]: E1002 19:34:53.996248 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:54.333000 kubelet[1417]: E1002 19:34:54.332876 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:54.334960 env[1112]: time="2023-10-02T19:34:54.334913151Z" level=info msg="CreateContainer within sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:34:54.348553 kubelet[1417]: I1002 19:34:54.348496 1417 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-wdgvt" podStartSLOduration=15.340296605 podCreationTimestamp="2023-10-02 19:34:37 +0000 UTC" firstStartedPulling="2023-10-02 19:34:38.165922509 +0000 UTC m=+190.675063510" lastFinishedPulling="2023-10-02 19:34:40.174063695 +0000 UTC m=+192.683204706" observedRunningTime="2023-10-02 19:34:40.65529193 +0000 UTC m=+193.164432941" watchObservedRunningTime="2023-10-02 19:34:54.348437801 +0000 UTC m=+206.857578802" Oct 2 19:34:54.349224 env[1112]: time="2023-10-02T19:34:54.349176755Z" level=info msg="CreateContainer within sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\"" Oct 2 19:34:54.349759 env[1112]: time="2023-10-02T19:34:54.349707826Z" level=info msg="StartContainer for \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\"" Oct 2 19:34:54.366456 systemd[1]: Started cri-containerd-2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c.scope. Oct 2 19:34:54.373618 systemd[1]: cri-containerd-2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c.scope: Deactivated successfully. Oct 2 19:34:54.373838 systemd[1]: Stopped cri-containerd-2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c.scope. Oct 2 19:34:54.587767 env[1112]: time="2023-10-02T19:34:54.587613519Z" level=info msg="shim disconnected" id=2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c Oct 2 19:34:54.587767 env[1112]: time="2023-10-02T19:34:54.587670927Z" level=warning msg="cleaning up after shim disconnected" id=2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c namespace=k8s.io Oct 2 19:34:54.587767 env[1112]: time="2023-10-02T19:34:54.587679363Z" level=info msg="cleaning up dead shim" Oct 2 19:34:54.594541 env[1112]: time="2023-10-02T19:34:54.594476228Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2180 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:54.594872 env[1112]: time="2023-10-02T19:34:54.594803515Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:34:54.595088 env[1112]: time="2023-10-02T19:34:54.595028509Z" level=error msg="Failed to pipe stdout of container \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\"" error="reading from a closed fifo" Oct 2 19:34:54.597615 env[1112]: time="2023-10-02T19:34:54.597576093Z" level=error msg="Failed to pipe stderr of container \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\"" error="reading from a closed fifo" Oct 2 19:34:54.599992 env[1112]: time="2023-10-02T19:34:54.599944970Z" level=error msg="StartContainer for \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:54.600249 kubelet[1417]: E1002 19:34:54.600225 1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c" Oct 2 19:34:54.600371 kubelet[1417]: E1002 19:34:54.600341 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:54.600371 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:54.600371 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:34:54.600371 kubelet[1417]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-psmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:54.600563 kubelet[1417]: E1002 19:34:54.600377 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:34:54.671110 kubelet[1417]: I1002 19:34:54.671076 1417 scope.go:115] "RemoveContainer" containerID="9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21" Oct 2 19:34:54.671463 kubelet[1417]: I1002 19:34:54.671434 1417 scope.go:115] "RemoveContainer" 
containerID="9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21" Oct 2 19:34:54.672067 env[1112]: time="2023-10-02T19:34:54.672031169Z" level=info msg="RemoveContainer for \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\"" Oct 2 19:34:54.672320 env[1112]: time="2023-10-02T19:34:54.672279346Z" level=info msg="RemoveContainer for \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\"" Oct 2 19:34:54.672449 env[1112]: time="2023-10-02T19:34:54.672390566Z" level=error msg="RemoveContainer for \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\" failed" error="failed to set removing state for container \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\": container is already in removing state" Oct 2 19:34:54.672539 kubelet[1417]: E1002 19:34:54.672527 1417 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\": container is already in removing state" containerID="9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21" Oct 2 19:34:54.672581 kubelet[1417]: E1002 19:34:54.672553 1417 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21": container is already in removing state; Skipping pod "cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b)" Oct 2 19:34:54.672617 kubelet[1417]: E1002 19:34:54.672609 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:54.672853 kubelet[1417]: E1002 19:34:54.672825 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b)\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:34:54.820811 env[1112]: time="2023-10-02T19:34:54.820741372Z" level=info msg="RemoveContainer for \"9c9426213f5e9041f75268a32f188684a65324594dc402afd2f9c7c40c345c21\" returns successfully" Oct 2 19:34:54.996797 kubelet[1417]: E1002 19:34:54.996725 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:55.344728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c-rootfs.mount: Deactivated successfully. 
Oct 2 19:34:55.997037 kubelet[1417]: E1002 19:34:55.996985 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:56.997924 kubelet[1417]: E1002 19:34:56.997859 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:57.692923 kubelet[1417]: W1002 19:34:57.692875 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda87e0494_243f_410e_874d_9f2966aa9c2b.slice/cri-containerd-2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c.scope WatchSource:0}: task 2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c not found: not found Oct 2 19:34:57.998447 kubelet[1417]: E1002 19:34:57.998301 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:58.354736 kubelet[1417]: E1002 19:34:58.354629 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:58.998856 kubelet[1417]: E1002 19:34:58.998788 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:59.999325 kubelet[1417]: E1002 19:34:59.999248 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:00.999799 kubelet[1417]: E1002 19:35:00.999729 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:02.000371 kubelet[1417]: E1002 19:35:02.000283 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:03.001530 kubelet[1417]: E1002 19:35:03.001447 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:03.356215 kubelet[1417]: E1002 19:35:03.356037 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:04.002542 kubelet[1417]: E1002 19:35:04.002461 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:05.003664 kubelet[1417]: E1002 19:35:05.003575 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:06.004107 kubelet[1417]: E1002 19:35:06.004036 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:07.005066 kubelet[1417]: E1002 19:35:07.004996 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:07.866453 kubelet[1417]: E1002 19:35:07.866383 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:08.005535 kubelet[1417]: E1002 19:35:08.005455 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:08.356875 kubelet[1417]: E1002 19:35:08.356836 1417 kubelet.go:2760] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:09.006103 kubelet[1417]: E1002 19:35:09.006019 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:09.332088 kubelet[1417]: E1002 19:35:09.331952 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:09.332335 kubelet[1417]: E1002 19:35:09.332163 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b)\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:35:10.007212 kubelet[1417]: E1002 19:35:10.007149 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:11.007780 kubelet[1417]: E1002 19:35:11.007727 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:12.008910 kubelet[1417]: E1002 19:35:12.008853 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:13.009857 kubelet[1417]: E1002 19:35:13.009806 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:13.358236 kubelet[1417]: E1002 19:35:13.358134 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:14.010081 kubelet[1417]: E1002 19:35:14.010045 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:15.010869 kubelet[1417]: E1002 19:35:15.010812 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:16.011240 kubelet[1417]: E1002 19:35:16.011187 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:17.012075 kubelet[1417]: E1002 19:35:17.012016 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:18.012533 kubelet[1417]: E1002 19:35:18.012441 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:18.358868 kubelet[1417]: E1002 19:35:18.358751 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:19.013467 kubelet[1417]: E1002 19:35:19.013407 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:20.014437 kubelet[1417]: E1002 19:35:20.014395 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:21.014790 kubelet[1417]: E1002 19:35:21.014726 1417 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:22.015468 kubelet[1417]: E1002 19:35:22.015421 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:22.332602 kubelet[1417]: E1002 19:35:22.332255 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:22.334096 env[1112]: time="2023-10-02T19:35:22.334052094Z" level=info msg="CreateContainer within sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:35:22.345327 env[1112]: time="2023-10-02T19:35:22.345284842Z" level=info msg="CreateContainer within sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b\"" Oct 2 19:35:22.345745 env[1112]: time="2023-10-02T19:35:22.345685409Z" level=info msg="StartContainer for \"4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b\"" Oct 2 19:35:22.360620 systemd[1]: Started cri-containerd-4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b.scope. Oct 2 19:35:22.367164 systemd[1]: cri-containerd-4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b.scope: Deactivated successfully. Oct 2 19:35:22.367415 systemd[1]: Stopped cri-containerd-4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b.scope. Oct 2 19:35:22.376026 env[1112]: time="2023-10-02T19:35:22.375979158Z" level=info msg="shim disconnected" id=4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b Oct 2 19:35:22.376026 env[1112]: time="2023-10-02T19:35:22.376026449Z" level=warning msg="cleaning up after shim disconnected" id=4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b namespace=k8s.io Oct 2 19:35:22.376189 env[1112]: time="2023-10-02T19:35:22.376035316Z" level=info msg="cleaning up dead shim" Oct 2 19:35:22.382013 env[1112]: time="2023-10-02T19:35:22.381967666Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2219 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:35:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:35:22.382272 env[1112]: time="2023-10-02T19:35:22.382219348Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:35:22.382426 env[1112]: time="2023-10-02T19:35:22.382380747Z" level=error msg="Failed to pipe stdout of container \"4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b\"" error="reading from a closed fifo" Oct 2 19:35:22.382426 env[1112]: time="2023-10-02T19:35:22.382391979Z" level=error msg="Failed to pipe stderr of container \"4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b\"" error="reading from a closed fifo" Oct 2 19:35:22.384611 env[1112]: time="2023-10-02T19:35:22.384571916Z" level=error msg="StartContainer for \"4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:35:22.384816 kubelet[1417]: E1002 19:35:22.384790 1417 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b" Oct 2 19:35:22.384980 kubelet[1417]: E1002 19:35:22.384893 1417 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:35:22.384980 kubelet[1417]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:35:22.384980 kubelet[1417]: rm /hostbin/cilium-mount Oct 2 19:35:22.384980 kubelet[1417]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-psmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:35:22.384980 kubelet[1417]: E1002 19:35:22.384925 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:35:22.717689 kubelet[1417]: I1002 19:35:22.717646 1417 scope.go:115] "RemoveContainer" containerID="2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c" Oct 2 19:35:22.717900 
kubelet[1417]: I1002 19:35:22.717885 1417 scope.go:115] "RemoveContainer" containerID="2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c" Oct 2 19:35:22.718761 env[1112]: time="2023-10-02T19:35:22.718726376Z" level=info msg="RemoveContainer for \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\"" Oct 2 19:35:22.718899 env[1112]: time="2023-10-02T19:35:22.718857717Z" level=info msg="RemoveContainer for \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\"" Oct 2 19:35:22.718969 env[1112]: time="2023-10-02T19:35:22.718943352Z" level=error msg="RemoveContainer for \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\" failed" error="failed to set removing state for container \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\": container is already in removing state" Oct 2 19:35:22.719093 kubelet[1417]: E1002 19:35:22.719069 1417 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\": container is already in removing state" containerID="2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c" Oct 2 19:35:22.719179 kubelet[1417]: E1002 19:35:22.719103 1417 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c": container is already in removing state; Skipping pod "cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b)" Oct 2 19:35:22.719233 kubelet[1417]: E1002 19:35:22.719182 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:22.719469 kubelet[1417]: E1002 19:35:22.719430 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b)\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:35:22.723145 env[1112]: time="2023-10-02T19:35:22.723117561Z" level=info msg="RemoveContainer for \"2b10155f69d414f8fe9c0006abc3ba1aafd65da820b67bbf9a8e00b09e590d1c\" returns successfully" Oct 2 19:35:23.016239 kubelet[1417]: E1002 19:35:23.016124 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:23.342196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b-rootfs.mount: Deactivated successfully. 
Oct 2 19:35:23.359770 kubelet[1417]: E1002 19:35:23.359734 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:24.017270 kubelet[1417]: E1002 19:35:24.017225 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:25.018295 kubelet[1417]: E1002 19:35:25.018235 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:25.481064 kubelet[1417]: W1002 19:35:25.481015 1417 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda87e0494_243f_410e_874d_9f2966aa9c2b.slice/cri-containerd-4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b.scope WatchSource:0}: task 4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b not found: not found Oct 2 19:35:26.018765 kubelet[1417]: E1002 19:35:26.018707 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:27.019722 kubelet[1417]: E1002 19:35:27.019606 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:27.865724 kubelet[1417]: E1002 19:35:27.865654 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:27.877971 env[1112]: time="2023-10-02T19:35:27.877925645Z" level=info msg="StopPodSandbox for \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\"" Oct 2 19:35:27.878515 env[1112]: time="2023-10-02T19:35:27.878446201Z" level=info msg="TearDown network for sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" successfully" Oct 2 19:35:27.878515 env[1112]: time="2023-10-02T19:35:27.878513831Z" level=info msg="StopPodSandbox for \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" returns successfully" Oct 2 19:35:27.878852 env[1112]: time="2023-10-02T19:35:27.878824606Z" level=info msg="RemovePodSandbox for \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\"" Oct 2 19:35:27.878926 env[1112]: time="2023-10-02T19:35:27.878849163Z" level=info msg="Forcibly stopping sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\"" Oct 2 19:35:27.878926 env[1112]: time="2023-10-02T19:35:27.878901754Z" level=info msg="TearDown network for sandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" successfully" Oct 2 19:35:27.881245 env[1112]: time="2023-10-02T19:35:27.881219882Z" level=info msg="RemovePodSandbox \"1837f9d35f8729fc710230f4a3c458d8053685fb30dc4c1d18e1eff15d2f5126\" returns successfully" Oct 2 19:35:28.020593 kubelet[1417]: E1002 19:35:28.020537 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:28.360447 kubelet[1417]: E1002 19:35:28.360411 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:29.021323 kubelet[1417]: E1002 19:35:29.021247 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:30.021624 kubelet[1417]: E1002 
19:35:30.021569 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:31.022279 kubelet[1417]: E1002 19:35:31.022222 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:32.022902 kubelet[1417]: E1002 19:35:32.022837 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:33.023557 kubelet[1417]: E1002 19:35:33.023470 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:33.362157 kubelet[1417]: E1002 19:35:33.362024 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:34.024260 kubelet[1417]: E1002 19:35:34.024204 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:35.024833 kubelet[1417]: E1002 19:35:35.024781 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:35.332312 kubelet[1417]: E1002 19:35:35.332154 1417 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:35.332471 kubelet[1417]: E1002 19:35:35.332365 1417 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jq6dc_kube-system(a87e0494-243f-410e-874d-9f2966aa9c2b)\"" pod="kube-system/cilium-jq6dc" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b Oct 2 19:35:36.025332 kubelet[1417]: E1002 19:35:36.025276 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:37.026313 kubelet[1417]: E1002 19:35:37.026260 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:38.027354 kubelet[1417]: E1002 19:35:38.027283 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:38.362594 kubelet[1417]: E1002 19:35:38.362485 1417 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:38.948772 env[1112]: time="2023-10-02T19:35:38.948715165Z" level=info msg="StopPodSandbox for \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\"" Oct 2 19:35:38.949181 env[1112]: time="2023-10-02T19:35:38.948783134Z" level=info msg="Container to stop \"4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:35:38.949576 env[1112]: time="2023-10-02T19:35:38.949553326Z" level=info msg="StopContainer for \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\" with timeout 30 (s)" Oct 2 19:35:38.950024 env[1112]: time="2023-10-02T19:35:38.949983768Z" level=info msg="Stop container \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\" with signal terminated" Oct 2 19:35:38.950674 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c-shm.mount: Deactivated successfully. Oct 2 19:35:38.955557 systemd[1]: cri-containerd-5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c.scope: Deactivated successfully. Oct 2 19:35:38.955000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:35:38.957529 kernel: audit: type=1334 audit(1696275338.955:701): prog-id=79 op=UNLOAD Oct 2 19:35:38.964212 kernel: audit: type=1334 audit(1696275338.961:702): prog-id=82 op=UNLOAD Oct 2 19:35:38.964312 kernel: audit: type=1334 audit(1696275338.962:703): prog-id=83 op=UNLOAD Oct 2 19:35:38.961000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:35:38.962000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:35:38.963024 systemd[1]: cri-containerd-57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a.scope: Deactivated successfully. Oct 2 19:35:38.972000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:35:38.973707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c-rootfs.mount: Deactivated successfully. Oct 2 19:35:38.974525 kernel: audit: type=1334 audit(1696275338.972:704): prog-id=86 op=UNLOAD Oct 2 19:35:38.976270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a-rootfs.mount: Deactivated successfully. Oct 2 19:35:38.979517 env[1112]: time="2023-10-02T19:35:38.979445058Z" level=info msg="shim disconnected" id=5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c Oct 2 19:35:38.979517 env[1112]: time="2023-10-02T19:35:38.979490776Z" level=warning msg="cleaning up after shim disconnected" id=5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c namespace=k8s.io Oct 2 19:35:38.979517 env[1112]: time="2023-10-02T19:35:38.979498981Z" level=info msg="cleaning up dead shim" Oct 2 19:35:38.979785 env[1112]: time="2023-10-02T19:35:38.979614562Z" level=info msg="shim disconnected" id=57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a Oct 2 19:35:38.979841 env[1112]: time="2023-10-02T19:35:38.979792071Z" level=warning msg="cleaning up after shim disconnected" id=57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a namespace=k8s.io Oct 2 19:35:38.979841 env[1112]: time="2023-10-02T19:35:38.979802792Z" level=info msg="cleaning up dead shim" Oct 2 19:35:38.985968 env[1112]: time="2023-10-02T19:35:38.985936809Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2270 runtime=io.containerd.runc.v2\n" Oct 2 19:35:38.986206 env[1112]: time="2023-10-02T19:35:38.986185324Z" level=info msg="TearDown network for sandbox \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" successfully" Oct 2 19:35:38.986236 env[1112]: time="2023-10-02T19:35:38.986206444Z" level=info msg="StopPodSandbox for \"5ce4de6950005a778c1901a7e4a8efdce0c38d0a52d2c44a546dc9f52e7af27c\" returns successfully" Oct 2 19:35:38.986936 env[1112]: time="2023-10-02T19:35:38.986891002Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2271 runtime=io.containerd.runc.v2\n" Oct 2 19:35:38.989640 env[1112]: time="2023-10-02T19:35:38.989615798Z" level=info msg="StopContainer for \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\" returns successfully" Oct 2 19:35:38.990002 env[1112]: time="2023-10-02T19:35:38.989966027Z" 
level=info msg="StopPodSandbox for \"b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565\"" Oct 2 19:35:38.990064 env[1112]: time="2023-10-02T19:35:38.990021132Z" level=info msg="Container to stop \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:35:38.991098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565-shm.mount: Deactivated successfully. Oct 2 19:35:38.995870 systemd[1]: cri-containerd-b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565.scope: Deactivated successfully. Oct 2 19:35:38.995000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:35:38.997532 kernel: audit: type=1334 audit(1696275338.995:705): prog-id=75 op=UNLOAD Oct 2 19:35:38.999000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:35:39.000520 kernel: audit: type=1334 audit(1696275338.999:706): prog-id=78 op=UNLOAD Oct 2 19:35:39.012384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565-rootfs.mount: Deactivated successfully. Oct 2 19:35:39.015935 env[1112]: time="2023-10-02T19:35:39.015860247Z" level=info msg="shim disconnected" id=b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565 Oct 2 19:35:39.015935 env[1112]: time="2023-10-02T19:35:39.015912437Z" level=warning msg="cleaning up after shim disconnected" id=b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565 namespace=k8s.io Oct 2 19:35:39.015935 env[1112]: time="2023-10-02T19:35:39.015940190Z" level=info msg="cleaning up dead shim" Oct 2 19:35:39.022313 env[1112]: time="2023-10-02T19:35:39.022261883Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2313 runtime=io.containerd.runc.v2\n" Oct 2 19:35:39.022635 env[1112]: time="2023-10-02T19:35:39.022608235Z" level=info msg="TearDown network for sandbox \"b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565\" successfully" Oct 2 19:35:39.022635 env[1112]: time="2023-10-02T19:35:39.022633283Z" level=info msg="StopPodSandbox for \"b1c34cfc1714cf9d1c5c3a4b3e177b05de5477990898a79ce24f66fa13fa4565\" returns successfully" Oct 2 19:35:39.027601 kubelet[1417]: E1002 19:35:39.027567 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:39.053630 kubelet[1417]: I1002 19:35:39.053604 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a87e0494-243f-410e-874d-9f2966aa9c2b-hubble-tls\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053695 kubelet[1417]: I1002 19:35:39.053652 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-host-proc-sys-net\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053695 kubelet[1417]: I1002 19:35:39.053675 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-xtables-lock\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053758 
kubelet[1417]: I1002 19:35:39.053706 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psmm7\" (UniqueName: \"kubernetes.io/projected/a87e0494-243f-410e-874d-9f2966aa9c2b-kube-api-access-psmm7\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053758 kubelet[1417]: I1002 19:35:39.053729 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-hostproc\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053758 kubelet[1417]: I1002 19:35:39.053747 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-ipsec-secrets\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053758 kubelet[1417]: I1002 19:35:39.053743 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.053857 kubelet[1417]: I1002 19:35:39.053764 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-run\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053857 kubelet[1417]: I1002 19:35:39.053779 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-lib-modules\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053857 kubelet[1417]: I1002 19:35:39.053794 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-bpf-maps\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053857 kubelet[1417]: I1002 19:35:39.053810 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-cgroup\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053857 kubelet[1417]: I1002 19:35:39.053829 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-config-path\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.053857 kubelet[1417]: I1002 19:35:39.053844 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-etc-cni-netd\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: 
\"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.054005 kubelet[1417]: I1002 19:35:39.053862 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a87e0494-243f-410e-874d-9f2966aa9c2b-clustermesh-secrets\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.054005 kubelet[1417]: I1002 19:35:39.053877 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cni-path\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.054005 kubelet[1417]: I1002 19:35:39.053895 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-host-proc-sys-kernel\") pod \"a87e0494-243f-410e-874d-9f2966aa9c2b\" (UID: \"a87e0494-243f-410e-874d-9f2966aa9c2b\") " Oct 2 19:35:39.054005 kubelet[1417]: I1002 19:35:39.053938 1417 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-host-proc-sys-net\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.054005 kubelet[1417]: I1002 19:35:39.053966 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.054005 kubelet[1417]: I1002 19:35:39.053970 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.054005 kubelet[1417]: I1002 19:35:39.053990 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-hostproc" (OuterVolumeSpecName: "hostproc") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.054165 kubelet[1417]: I1002 19:35:39.054004 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.054407 kubelet[1417]: W1002 19:35:39.054183 1417 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a87e0494-243f-410e-874d-9f2966aa9c2b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:35:39.054407 kubelet[1417]: I1002 19:35:39.054249 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.054407 kubelet[1417]: I1002 19:35:39.054281 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.054407 kubelet[1417]: I1002 19:35:39.054303 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.054407 kubelet[1417]: I1002 19:35:39.054324 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.054407 kubelet[1417]: I1002 19:35:39.054345 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cni-path" (OuterVolumeSpecName: "cni-path") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:39.055993 kubelet[1417]: I1002 19:35:39.055967 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:35:39.056717 kubelet[1417]: I1002 19:35:39.056680 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87e0494-243f-410e-874d-9f2966aa9c2b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:39.057202 kubelet[1417]: I1002 19:35:39.057172 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:35:39.057415 kubelet[1417]: I1002 19:35:39.057391 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87e0494-243f-410e-874d-9f2966aa9c2b-kube-api-access-psmm7" (OuterVolumeSpecName: "kube-api-access-psmm7") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "kube-api-access-psmm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:39.057810 kubelet[1417]: I1002 19:35:39.057778 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87e0494-243f-410e-874d-9f2966aa9c2b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a87e0494-243f-410e-874d-9f2966aa9c2b" (UID: "a87e0494-243f-410e-874d-9f2966aa9c2b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:35:39.154090 kubelet[1417]: I1002 19:35:39.154066 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvcn5\" (UniqueName: \"kubernetes.io/projected/f76574a0-7e88-4153-b749-73771b95d06c-kube-api-access-bvcn5\") pod \"f76574a0-7e88-4153-b749-73771b95d06c\" (UID: \"f76574a0-7e88-4153-b749-73771b95d06c\") " Oct 2 19:35:39.154158 kubelet[1417]: I1002 19:35:39.154099 1417 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f76574a0-7e88-4153-b749-73771b95d06c-cilium-config-path\") pod \"f76574a0-7e88-4153-b749-73771b95d06c\" (UID: \"f76574a0-7e88-4153-b749-73771b95d06c\") " Oct 2 19:35:39.154158 kubelet[1417]: I1002 19:35:39.154130 1417 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a87e0494-243f-410e-874d-9f2966aa9c2b-hubble-tls\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154158 kubelet[1417]: I1002 19:35:39.154144 1417 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-ipsec-secrets\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154158 kubelet[1417]: I1002 19:35:39.154155 1417 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-run\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154167 1417 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-xtables-lock\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154180 1417 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-psmm7\" (UniqueName: \"kubernetes.io/projected/a87e0494-243f-410e-874d-9f2966aa9c2b-kube-api-access-psmm7\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154195 1417 
reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-hostproc\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154208 1417 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-etc-cni-netd\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154219 1417 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-lib-modules\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154231 1417 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-bpf-maps\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154242 1417 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-cgroup\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154253 1417 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a87e0494-243f-410e-874d-9f2966aa9c2b-cilium-config-path\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154266 kubelet[1417]: I1002 19:35:39.154265 1417 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-host-proc-sys-kernel\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154472 kubelet[1417]: I1002 19:35:39.154278 1417 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a87e0494-243f-410e-874d-9f2966aa9c2b-clustermesh-secrets\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154472 kubelet[1417]: I1002 19:35:39.154290 1417 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a87e0494-243f-410e-874d-9f2966aa9c2b-cni-path\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.154472 kubelet[1417]: W1002 19:35:39.154361 1417 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f76574a0-7e88-4153-b749-73771b95d06c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:35:39.156709 kubelet[1417]: I1002 19:35:39.156684 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76574a0-7e88-4153-b749-73771b95d06c-kube-api-access-bvcn5" (OuterVolumeSpecName: "kube-api-access-bvcn5") pod "f76574a0-7e88-4153-b749-73771b95d06c" (UID: "f76574a0-7e88-4153-b749-73771b95d06c"). InnerVolumeSpecName "kube-api-access-bvcn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:39.156839 kubelet[1417]: I1002 19:35:39.156810 1417 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f76574a0-7e88-4153-b749-73771b95d06c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f76574a0-7e88-4153-b749-73771b95d06c" (UID: "f76574a0-7e88-4153-b749-73771b95d06c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:35:39.255142 kubelet[1417]: I1002 19:35:39.255021 1417 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f76574a0-7e88-4153-b749-73771b95d06c-cilium-config-path\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.255142 kubelet[1417]: I1002 19:35:39.255050 1417 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bvcn5\" (UniqueName: \"kubernetes.io/projected/f76574a0-7e88-4153-b749-73771b95d06c-kube-api-access-bvcn5\") on node \"10.0.0.14\" DevicePath \"\"" Oct 2 19:35:39.743388 kubelet[1417]: I1002 19:35:39.743348 1417 scope.go:115] "RemoveContainer" containerID="57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a" Oct 2 19:35:39.744436 env[1112]: time="2023-10-02T19:35:39.744398546Z" level=info msg="RemoveContainer for \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\"" Oct 2 19:35:39.747164 systemd[1]: Removed slice kubepods-besteffort-podf76574a0_7e88_4153_b749_73771b95d06c.slice. Oct 2 19:35:39.747683 env[1112]: time="2023-10-02T19:35:39.747642011Z" level=info msg="RemoveContainer for \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\" returns successfully" Oct 2 19:35:39.747851 kubelet[1417]: I1002 19:35:39.747809 1417 scope.go:115] "RemoveContainer" containerID="57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a" Oct 2 19:35:39.748103 env[1112]: time="2023-10-02T19:35:39.748006547Z" level=error msg="ContainerStatus for \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\": not found" Oct 2 19:35:39.748272 kubelet[1417]: E1002 19:35:39.748239 1417 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\": not found" containerID="57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a" Oct 2 19:35:39.748471 kubelet[1417]: I1002 19:35:39.748287 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a} err="failed to get container status \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\": rpc error: code = NotFound desc = an error occurred when try to find container \"57476866970ba8bc5b9677a854873c8f2d70c47c6ff0844f4f1aee36e054160a\": not found" Oct 2 19:35:39.748471 kubelet[1417]: I1002 19:35:39.748304 1417 scope.go:115] "RemoveContainer" containerID="4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b" Oct 2 19:35:39.748773 systemd[1]: Removed slice kubepods-burstable-poda87e0494_243f_410e_874d_9f2966aa9c2b.slice. Oct 2 19:35:39.749340 env[1112]: time="2023-10-02T19:35:39.749310698Z" level=info msg="RemoveContainer for \"4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b\"" Oct 2 19:35:39.751872 env[1112]: time="2023-10-02T19:35:39.751836864Z" level=info msg="RemoveContainer for \"4ffb3171aae21836ee433cce3ac09b43e6fa541dbeb0b2990878f001f447158b\" returns successfully" Oct 2 19:35:39.950645 systemd[1]: var-lib-kubelet-pods-a87e0494\x2d243f\x2d410e\x2d874d\x2d9f2966aa9c2b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsmm7.mount: Deactivated successfully. 
Oct 2 19:35:39.950744 systemd[1]: var-lib-kubelet-pods-f76574a0\x2d7e88\x2d4153\x2db749\x2d73771b95d06c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbvcn5.mount: Deactivated successfully. Oct 2 19:35:39.950797 systemd[1]: var-lib-kubelet-pods-a87e0494\x2d243f\x2d410e\x2d874d\x2d9f2966aa9c2b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:35:39.950850 systemd[1]: var-lib-kubelet-pods-a87e0494\x2d243f\x2d410e\x2d874d\x2d9f2966aa9c2b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:35:39.950905 systemd[1]: var-lib-kubelet-pods-a87e0494\x2d243f\x2d410e\x2d874d\x2d9f2966aa9c2b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:35:40.028386 kubelet[1417]: E1002 19:35:40.028300 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:40.333686 kubelet[1417]: I1002 19:35:40.333548 1417 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a87e0494-243f-410e-874d-9f2966aa9c2b path="/var/lib/kubelet/pods/a87e0494-243f-410e-874d-9f2966aa9c2b/volumes" Oct 2 19:35:40.333933 kubelet[1417]: I1002 19:35:40.333911 1417 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f76574a0-7e88-4153-b749-73771b95d06c path="/var/lib/kubelet/pods/f76574a0-7e88-4153-b749-73771b95d06c/volumes"