Oct 2 19:17:38.013082 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:17:38.013114 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:17:38.013127 kernel: BIOS-provided physical RAM map: Oct 2 19:17:38.013136 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 2 19:17:38.013143 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 2 19:17:38.013151 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 2 19:17:38.013161 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 2 19:17:38.013169 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 2 19:17:38.013177 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 2 19:17:38.013186 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 2 19:17:38.013194 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 2 19:17:38.013202 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 2 19:17:38.013210 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 2 19:17:38.013218 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 2 19:17:38.013229 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 2 19:17:38.013239 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 2 19:17:38.013247 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 2 19:17:38.013256 kernel: NX (Execute Disable) protection: active Oct 2 19:17:38.013264 kernel: e820: update [mem 0x9b404018-0x9b40dc57] usable ==> usable Oct 2 19:17:38.013273 kernel: e820: update [mem 0x9b404018-0x9b40dc57] usable ==> usable Oct 2 19:17:38.013281 kernel: e820: update [mem 0x9b3c7018-0x9b403e57] usable ==> usable Oct 2 19:17:38.013290 kernel: e820: update [mem 0x9b3c7018-0x9b403e57] usable ==> usable Oct 2 19:17:38.013298 kernel: extended physical RAM map: Oct 2 19:17:38.013306 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 2 19:17:38.013315 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 2 19:17:38.013335 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 2 19:17:38.013344 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 2 19:17:38.013362 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 2 19:17:38.013376 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 2 19:17:38.013385 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 2 19:17:38.013393 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b3c7017] usable Oct 2 19:17:38.013401 kernel: reserve setup_data: [mem 0x000000009b3c7018-0x000000009b403e57] usable Oct 2 19:17:38.013414 kernel: reserve setup_data: [mem 0x000000009b403e58-0x000000009b404017] usable Oct 2 19:17:38.013427 kernel: reserve setup_data: [mem 0x000000009b404018-0x000000009b40dc57] 
usable Oct 2 19:17:38.013435 kernel: reserve setup_data: [mem 0x000000009b40dc58-0x000000009c8eefff] usable Oct 2 19:17:38.013444 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 2 19:17:38.013455 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 2 19:17:38.013463 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 2 19:17:38.013477 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 2 19:17:38.013485 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 2 19:17:38.013509 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 2 19:17:38.013535 kernel: efi: EFI v2.70 by EDK II Oct 2 19:17:38.013556 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Oct 2 19:17:38.013581 kernel: random: crng init done Oct 2 19:17:38.013597 kernel: SMBIOS 2.8 present. Oct 2 19:17:38.013618 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Oct 2 19:17:38.013652 kernel: Hypervisor detected: KVM Oct 2 19:17:38.013666 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:17:38.013690 kernel: kvm-clock: cpu 0, msr 4bf8a001, primary cpu clock Oct 2 19:17:38.013706 kernel: kvm-clock: using sched offset of 5478564030 cycles Oct 2 19:17:38.013749 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:17:38.013771 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:17:38.013808 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:17:38.013836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:17:38.013859 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 2 19:17:38.013890 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:17:38.013912 kernel: Using GB pages for direct mapping Oct 2 19:17:38.013921 kernel: Secure boot disabled Oct 2 19:17:38.013931 kernel: ACPI: Early table checksum verification disabled Oct 2 19:17:38.013940 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 2 19:17:38.013956 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Oct 2 19:17:38.013973 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:17:38.013993 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:17:38.014003 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 2 19:17:38.014013 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:17:38.014030 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:17:38.014040 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:17:38.014052 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 2 19:17:38.014065 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Oct 2 19:17:38.014082 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Oct 2 19:17:38.014095 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 2 19:17:38.014104 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Oct 2 19:17:38.014113 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Oct 2 19:17:38.014129 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Oct 2 19:17:38.014139 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Oct 2 19:17:38.014148 kernel: No NUMA configuration found Oct 2 19:17:38.014158 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 2 19:17:38.014167 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 2 19:17:38.014176 kernel: Zone ranges: Oct 2 19:17:38.014188 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:17:38.014198 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 2 19:17:38.014207 kernel: Normal empty Oct 2 19:17:38.014216 kernel: Movable zone start for each node Oct 2 19:17:38.014225 kernel: Early memory node ranges Oct 2 19:17:38.014234 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 2 19:17:38.014243 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 2 19:17:38.014253 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 2 19:17:38.014262 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 2 19:17:38.014273 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 2 19:17:38.014282 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 2 19:17:38.014292 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 2 19:17:38.014301 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:17:38.014310 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 2 19:17:38.014319 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 2 19:17:38.014329 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:17:38.014338 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 2 19:17:38.014347 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 2 19:17:38.014358 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 2 19:17:38.014368 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 2 19:17:38.014377 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:17:38.014386 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:17:38.014396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:17:38.014405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:17:38.014414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:17:38.014424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:17:38.014433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:17:38.014444 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:17:38.014453 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:17:38.014462 kernel: TSC deadline timer available Oct 2 19:17:38.014472 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:17:38.014481 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:17:38.014490 kernel: kvm-guest: setup PV sched yield Oct 2 19:17:38.014509 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Oct 2 19:17:38.014521 kernel: Booting paravirtualized kernel on KVM Oct 2 19:17:38.014531 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:17:38.014541 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:17:38.014553 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:17:38.014567 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 
2 19:17:38.014590 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:17:38.014602 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:17:38.014611 kernel: kvm-guest: stealtime: cpu 0, msr 9ba1c0c0 Oct 2 19:17:38.014621 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:17:38.014637 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 19:17:38.014646 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 2 19:17:38.014655 kernel: Policy zone: DMA32 Oct 2 19:17:38.014666 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:17:38.014675 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:17:38.014687 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:17:38.014696 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:17:38.014706 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:17:38.014727 kernel: Memory: 2405480K/2567000K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 161260K reserved, 0K cma-reserved) Oct 2 19:17:38.014740 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:17:38.014750 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:17:38.014759 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:17:38.014768 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:17:38.014778 kernel: rcu: RCU event tracing is enabled. Oct 2 19:17:38.014788 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:17:38.014798 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:17:38.014808 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:17:38.014818 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:17:38.014835 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:17:38.014850 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:17:38.014865 kernel: Console: colour dummy device 80x25 Oct 2 19:17:38.014879 kernel: printk: console [ttyS0] enabled Oct 2 19:17:38.014889 kernel: ACPI: Core revision 20210730 Oct 2 19:17:38.014899 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:17:38.014909 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:17:38.014918 kernel: x2apic enabled Oct 2 19:17:38.014928 kernel: Switched APIC routing to physical x2apic. Oct 2 19:17:38.014938 kernel: kvm-guest: setup PV IPIs Oct 2 19:17:38.014950 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:17:38.014960 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:17:38.014970 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:17:38.014980 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:17:38.014990 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:17:38.015000 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:17:38.015010 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:17:38.015019 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:17:38.015031 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:17:38.015041 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:17:38.015051 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:17:38.015061 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:17:38.015071 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:17:38.015081 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:17:38.015091 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:17:38.015101 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:17:38.015114 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:17:38.015126 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:17:38.015136 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:17:38.015144 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:17:38.015167 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:17:38.015181 kernel: LSM: Security Framework initializing Oct 2 19:17:38.015190 kernel: SELinux: Initializing. Oct 2 19:17:38.015205 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:17:38.015214 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:17:38.015223 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:17:38.015237 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:17:38.015246 kernel: ... version: 0 Oct 2 19:17:38.015254 kernel: ... bit width: 48 Oct 2 19:17:38.015263 kernel: ... generic registers: 6 Oct 2 19:17:38.015272 kernel: ... value mask: 0000ffffffffffff Oct 2 19:17:38.015282 kernel: ... max period: 00007fffffffffff Oct 2 19:17:38.015292 kernel: ... fixed-purpose events: 0 Oct 2 19:17:38.015301 kernel: ... event mask: 000000000000003f Oct 2 19:17:38.015311 kernel: signal: max sigframe size: 1776 Oct 2 19:17:38.015323 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:17:38.015333 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:17:38.015342 kernel: x86: Booting SMP configuration: Oct 2 19:17:38.015352 kernel: .... 
node #0, CPUs: #1 Oct 2 19:17:38.015361 kernel: kvm-clock: cpu 1, msr 4bf8a041, secondary cpu clock Oct 2 19:17:38.015370 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:17:38.015380 kernel: kvm-guest: stealtime: cpu 1, msr 9ba9c0c0 Oct 2 19:17:38.015389 kernel: #2 Oct 2 19:17:38.015399 kernel: kvm-clock: cpu 2, msr 4bf8a081, secondary cpu clock Oct 2 19:17:38.015409 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:17:38.015420 kernel: kvm-guest: stealtime: cpu 2, msr 9bb1c0c0 Oct 2 19:17:38.015430 kernel: #3 Oct 2 19:17:38.015439 kernel: kvm-clock: cpu 3, msr 4bf8a0c1, secondary cpu clock Oct 2 19:17:38.015449 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:17:38.015458 kernel: kvm-guest: stealtime: cpu 3, msr 9bb9c0c0 Oct 2 19:17:38.015468 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:17:38.015477 kernel: smpboot: Max logical packages: 1 Oct 2 19:17:38.015487 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:17:38.015496 kernel: devtmpfs: initialized Oct 2 19:17:38.015508 kernel: x86/mm: Memory block size: 128MB Oct 2 19:17:38.015518 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 2 19:17:38.015527 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 2 19:17:38.015537 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 2 19:17:38.015546 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 2 19:17:38.015556 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 2 19:17:38.015566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:17:38.015581 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:17:38.015596 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:17:38.015608 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:17:38.015618 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:17:38.015641 kernel: audit: type=2000 audit(1696274257.452:1): state=initialized audit_enabled=0 res=1 Oct 2 19:17:38.015656 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:17:38.015665 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:17:38.015677 kernel: cpuidle: using governor menu Oct 2 19:17:38.015689 kernel: ACPI: bus type PCI registered Oct 2 19:17:38.015701 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:17:38.015714 kernel: dca service started, version 1.12.1 Oct 2 19:17:38.015743 kernel: PCI: Using configuration type 1 for base access Oct 2 19:17:38.015757 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:17:38.015768 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:17:38.015778 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:17:38.015791 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:17:38.015811 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:17:38.015826 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:17:38.015840 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:17:38.015857 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:17:38.015877 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:17:38.015893 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:17:38.015907 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:17:38.015920 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:17:38.015946 kernel: ACPI: Interpreter enabled Oct 2 19:17:38.015971 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:17:38.015984 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:17:38.016001 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:17:38.016020 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:17:38.016036 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:17:38.016360 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:17:38.016390 kernel: acpiphp: Slot [3] registered Oct 2 19:17:38.016407 kernel: acpiphp: Slot [4] registered Oct 2 19:17:38.016424 kernel: acpiphp: Slot [5] registered Oct 2 19:17:38.016440 kernel: acpiphp: Slot [6] registered Oct 2 19:17:38.016455 kernel: acpiphp: Slot [7] registered Oct 2 19:17:38.016469 kernel: acpiphp: Slot [8] registered Oct 2 19:17:38.016483 kernel: acpiphp: Slot [9] registered Oct 2 19:17:38.016492 kernel: acpiphp: Slot [10] registered Oct 2 19:17:38.016502 kernel: acpiphp: Slot [11] registered Oct 2 19:17:38.016517 kernel: acpiphp: Slot [12] registered Oct 2 19:17:38.016527 kernel: acpiphp: Slot [13] registered Oct 2 19:17:38.016537 kernel: acpiphp: Slot [14] registered Oct 2 19:17:38.016552 kernel: acpiphp: Slot [15] registered Oct 2 19:17:38.016564 kernel: acpiphp: Slot [16] registered Oct 2 19:17:38.016577 kernel: acpiphp: Slot [17] registered Oct 2 19:17:38.016587 kernel: acpiphp: Slot [18] registered Oct 2 19:17:38.016599 kernel: acpiphp: Slot [19] registered Oct 2 19:17:38.016608 kernel: acpiphp: Slot [20] registered Oct 2 19:17:38.016618 kernel: acpiphp: Slot [21] registered Oct 2 19:17:38.016634 kernel: acpiphp: Slot [22] registered Oct 2 19:17:38.016644 kernel: acpiphp: Slot [23] registered Oct 2 19:17:38.016653 kernel: acpiphp: Slot [24] registered Oct 2 19:17:38.016663 kernel: acpiphp: Slot [25] registered Oct 2 19:17:38.016675 kernel: acpiphp: Slot [26] registered Oct 2 19:17:38.016691 kernel: acpiphp: Slot [27] registered Oct 2 19:17:38.016706 kernel: acpiphp: Slot [28] registered Oct 2 19:17:38.016732 kernel: acpiphp: Slot [29] registered Oct 2 19:17:38.016742 kernel: acpiphp: Slot [30] registered Oct 2 19:17:38.016752 kernel: acpiphp: Slot [31] registered Oct 2 19:17:38.016762 kernel: PCI host bridge to bus 0000:00 Oct 2 19:17:38.016890 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:17:38.016974 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:17:38.017087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:17:38.017180 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:17:38.017262 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Oct 2 19:17:38.017342 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:17:38.017458 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:17:38.017569 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:17:38.017768 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:17:38.017908 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:17:38.018033 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:17:38.018166 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:17:38.018313 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:17:38.018451 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:17:38.018609 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:17:38.018776 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 2 19:17:38.018967 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Oct 2 19:17:38.019088 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:17:38.019221 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 2 19:17:38.019315 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Oct 2 19:17:38.019540 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 2 19:17:38.019689 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Oct 2 19:17:38.019854 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:17:38.020125 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:17:38.020295 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:17:38.020601 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 2 19:17:38.020846 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 2 19:17:38.021088 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:17:38.021292 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:17:38.021413 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 2 19:17:38.021630 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 2 19:17:38.021826 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:17:38.022008 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:17:38.023184 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Oct 2 19:17:38.023442 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 2 19:17:38.023601 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 2 19:17:38.023616 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:17:38.023647 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:17:38.023658 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:17:38.023676 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:17:38.023691 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:17:38.023711 kernel: iommu: Default domain type: Translated Oct 2 19:17:38.023752 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:17:38.023905 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:17:38.024061 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:17:38.024187 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Oct 2 19:17:38.024206 kernel: vgaarb: loaded Oct 2 19:17:38.024221 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:17:38.024232 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:17:38.024247 kernel: PTP clock support registered Oct 2 19:17:38.024260 kernel: Registered efivars operations Oct 2 19:17:38.024275 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:17:38.024290 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:17:38.024309 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 2 19:17:38.024328 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 2 19:17:38.024357 kernel: e820: reserve RAM buffer [mem 0x9b3c7018-0x9bffffff] Oct 2 19:17:38.024367 kernel: e820: reserve RAM buffer [mem 0x9b404018-0x9bffffff] Oct 2 19:17:38.024379 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 2 19:17:38.024424 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 2 19:17:38.024434 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:17:38.024452 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:17:38.024475 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:17:38.024487 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:17:38.024537 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:17:38.024555 kernel: pnp: PnP ACPI init Oct 2 19:17:38.024977 kernel: pnp 00:02: [dma 2] Oct 2 19:17:38.025001 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:17:38.025015 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:17:38.025034 kernel: NET: Registered PF_INET protocol family Oct 2 19:17:38.025050 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:17:38.025072 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:17:38.025092 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:17:38.025110 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:17:38.025122 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:17:38.025138 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:17:38.025155 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:17:38.025184 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:17:38.025199 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:17:38.025215 kernel: NET: Registered PF_XDP protocol family Oct 2 19:17:38.025402 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 2 19:17:38.025660 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 2 19:17:38.025807 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:17:38.025932 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:17:38.026054 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:17:38.026186 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:17:38.026325 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Oct 2 19:17:38.026491 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:17:38.026673 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:17:38.026893 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:17:38.026907 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:17:38.026920 kernel: Initialise system trusted keyrings Oct 2 19:17:38.026933 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:17:38.026942 kernel: Key type asymmetric registered Oct 2 19:17:38.026957 kernel: Asymmetric key parser 'x509' registered Oct 2 19:17:38.026973 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:17:38.026983 kernel: io scheduler mq-deadline registered Oct 2 19:17:38.027002 kernel: io scheduler kyber registered Oct 2 19:17:38.027019 kernel: io scheduler bfq registered Oct 2 19:17:38.027034 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:17:38.027044 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:17:38.027064 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:17:38.027074 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:17:38.027084 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:17:38.027099 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:17:38.027110 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:17:38.027126 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:17:38.027136 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:17:38.027146 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:17:38.027307 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:17:38.027441 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:17:38.027638 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:17:37 UTC (1696274257) Oct 2 19:17:38.027774 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:17:38.027789 kernel: efifb: probing for efifb Oct 2 19:17:38.027799 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 2 19:17:38.027810 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 2 19:17:38.027820 kernel: efifb: scrolling: redraw Oct 2 19:17:38.027830 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 2 19:17:38.027840 kernel: Console: switching to colour frame buffer device 160x50 Oct 2 19:17:38.027850 kernel: fb0: EFI VGA frame buffer device Oct 2 19:17:38.027864 kernel: pstore: Registered efi as persistent store backend Oct 2 19:17:38.027875 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:17:38.027885 kernel: Segment Routing with IPv6 Oct 2 19:17:38.027895 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:17:38.027907 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:17:38.027917 kernel: Key type dns_resolver registered Oct 2 19:17:38.027927 kernel: IPI shorthand broadcast: enabled Oct 2 19:17:38.027938 kernel: sched_clock: Marking stable (439361714, 99420848)->(565075110, -26292548) Oct 2 19:17:38.027961 kernel: registered taskstats version 1 Oct 2 19:17:38.027981 kernel: Loading compiled-in X.509 certificates Oct 2 19:17:38.027994 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:17:38.028006 kernel: Key type .fscrypt registered Oct 2 19:17:38.028017 kernel: Key type fscrypt-provisioning registered Oct 2 19:17:38.028030 kernel: pstore: Using crash dump compression: deflate Oct 2 19:17:38.028040 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 2 19:17:38.028056 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:17:38.028067 kernel: ima: No architecture policies found Oct 2 19:17:38.028080 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:17:38.028100 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:17:38.028115 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:17:38.028133 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:17:38.028146 kernel: Run /init as init process Oct 2 19:17:38.028161 kernel: with arguments: Oct 2 19:17:38.028172 kernel: /init Oct 2 19:17:38.028182 kernel: with environment: Oct 2 19:17:38.028192 kernel: HOME=/ Oct 2 19:17:38.028213 kernel: TERM=linux Oct 2 19:17:38.028234 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:17:38.028252 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:17:38.028270 systemd[1]: Detected virtualization kvm. Oct 2 19:17:38.028295 systemd[1]: Detected architecture x86-64. Oct 2 19:17:38.028308 systemd[1]: Running in initrd. Oct 2 19:17:38.028337 systemd[1]: No hostname configured, using default hostname. Oct 2 19:17:38.028362 systemd[1]: Hostname set to . Oct 2 19:17:38.028397 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:17:38.028416 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:17:38.028432 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:17:38.028445 systemd[1]: Reached target cryptsetup.target. Oct 2 19:17:38.028463 systemd[1]: Reached target paths.target. Oct 2 19:17:38.028485 systemd[1]: Reached target slices.target. Oct 2 19:17:38.028499 systemd[1]: Reached target swap.target. Oct 2 19:17:38.028512 systemd[1]: Reached target timers.target. Oct 2 19:17:38.028529 systemd[1]: Listening on iscsid.socket. Oct 2 19:17:38.028540 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:17:38.028560 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:17:38.028572 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:17:38.028583 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:17:38.028597 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:17:38.028608 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:17:38.028619 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:17:38.028638 systemd[1]: Reached target sockets.target. Oct 2 19:17:38.028652 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:17:38.028666 systemd[1]: Finished network-cleanup.service. Oct 2 19:17:38.028677 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:17:38.028688 systemd[1]: Starting systemd-journald.service... Oct 2 19:17:38.028698 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:17:38.028709 systemd[1]: Starting systemd-resolved.service... Oct 2 19:17:38.028733 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:17:38.028747 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:17:38.028758 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:17:38.028772 systemd[1]: Finished systemd-vconsole-setup.service. 
Oct 2 19:17:38.028783 kernel: audit: type=1130 audit(1696274258.011:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.028795 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:17:38.028806 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:17:38.028820 systemd-journald[198]: Journal started Oct 2 19:17:38.028878 systemd-journald[198]: Runtime Journal (/run/log/journal/6aa5dd07a8ce444bbe75102d2c658146) is 6.0M, max 48.4M, 42.4M free. Oct 2 19:17:38.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.020604 systemd-modules-load[199]: Inserted module 'overlay' Oct 2 19:17:38.030253 systemd[1]: Started systemd-journald.service. Oct 2 19:17:38.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.032058 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:17:38.037012 kernel: audit: type=1130 audit(1696274258.030:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.037045 kernel: audit: type=1130 audit(1696274258.033:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.040885 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:17:38.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.044733 kernel: audit: type=1130 audit(1696274258.041:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.044984 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:17:38.046762 systemd-resolved[200]: Positive Trust Anchors: Oct 2 19:17:38.046786 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:17:38.046815 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:17:38.049351 systemd-resolved[200]: Defaulting to hostname 'linux'. Oct 2 19:17:38.050142 systemd[1]: Started systemd-resolved.service. 
Oct 2 19:17:38.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.056422 systemd[1]: Reached target nss-lookup.target. Oct 2 19:17:38.059023 kernel: audit: type=1130 audit(1696274258.053:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.059041 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:17:38.061730 kernel: Bridge firewalling registered Oct 2 19:17:38.061762 systemd-modules-load[199]: Inserted module 'br_netfilter' Oct 2 19:17:38.065402 dracut-cmdline[214]: dracut-dracut-053 Oct 2 19:17:38.067692 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:17:38.078745 kernel: SCSI subsystem initialized Oct 2 19:17:38.090752 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:17:38.090786 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:17:38.090801 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:17:38.094649 systemd-modules-load[199]: Inserted module 'dm_multipath' Oct 2 19:17:38.095525 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:17:38.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.099374 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:17:38.100419 kernel: audit: type=1130 audit(1696274258.096:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.107326 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:17:38.111089 kernel: audit: type=1130 audit(1696274258.107:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.136791 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:17:38.147760 kernel: iscsi: registered transport (tcp) Oct 2 19:17:38.169744 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:17:38.169776 kernel: QLogic iSCSI HBA Driver Oct 2 19:17:38.196416 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:17:38.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.199459 systemd[1]: Starting dracut-pre-udev.service... 
Oct 2 19:17:38.200479 kernel: audit: type=1130 audit(1696274258.196:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.249765 kernel: raid6: avx2x4 gen() 27927 MB/s Oct 2 19:17:38.266743 kernel: raid6: avx2x4 xor() 7981 MB/s Oct 2 19:17:38.283751 kernel: raid6: avx2x2 gen() 31378 MB/s Oct 2 19:17:38.300750 kernel: raid6: avx2x2 xor() 12350 MB/s Oct 2 19:17:38.317756 kernel: raid6: avx2x1 gen() 15971 MB/s Oct 2 19:17:38.334754 kernel: raid6: avx2x1 xor() 11204 MB/s Oct 2 19:17:38.351759 kernel: raid6: sse2x4 gen() 14011 MB/s Oct 2 19:17:38.368764 kernel: raid6: sse2x4 xor() 6955 MB/s Oct 2 19:17:38.385748 kernel: raid6: sse2x2 gen() 15892 MB/s Oct 2 19:17:38.402749 kernel: raid6: sse2x2 xor() 9827 MB/s Oct 2 19:17:38.419767 kernel: raid6: sse2x1 gen() 12051 MB/s Oct 2 19:17:38.437170 kernel: raid6: sse2x1 xor() 6944 MB/s Oct 2 19:17:38.437236 kernel: raid6: using algorithm avx2x2 gen() 31378 MB/s Oct 2 19:17:38.437246 kernel: raid6: .... xor() 12350 MB/s, rmw enabled Oct 2 19:17:38.437255 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:17:38.450753 kernel: xor: automatically using best checksumming function avx Oct 2 19:17:38.544777 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:17:38.555876 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:17:38.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.558000 audit: BPF prog-id=7 op=LOAD Oct 2 19:17:38.558000 audit: BPF prog-id=8 op=LOAD Oct 2 19:17:38.559634 systemd[1]: Starting systemd-udevd.service... Oct 2 19:17:38.560740 kernel: audit: type=1130 audit(1696274258.556:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.578442 systemd-udevd[400]: Using default interface naming scheme 'v252'. Oct 2 19:17:38.584365 systemd[1]: Started systemd-udevd.service. Oct 2 19:17:38.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.586856 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:17:38.599490 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Oct 2 19:17:38.633395 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:17:38.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.635085 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:17:38.675115 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:17:38.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:38.702742 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:17:38.717736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:17:38.722770 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:17:38.739743 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 2 19:17:38.739783 kernel: AES CTR mode by8 optimization enabled Oct 2 19:17:38.742756 kernel: libata version 3.00 loaded. Oct 2 19:17:38.745755 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:17:38.754745 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Oct 2 19:17:38.759741 kernel: scsi host0: ata_piix Oct 2 19:17:38.760395 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:17:38.764003 kernel: scsi host1: ata_piix Oct 2 19:17:38.764199 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:17:38.764235 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:17:38.766974 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:17:38.768654 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:17:38.773318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:17:38.777819 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:17:38.779859 systemd[1]: Starting disk-uuid.service... Oct 2 19:17:38.786747 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:17:38.791749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:17:38.924002 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:17:38.925877 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:17:38.959874 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:17:38.960199 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:17:38.977743 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:17:39.794741 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:17:39.795201 disk-uuid[517]: The operation has completed successfully. Oct 2 19:17:39.816828 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:17:39.816964 systemd[1]: Finished disk-uuid.service. Oct 2 19:17:39.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.828945 systemd[1]: Starting verity-setup.service... Oct 2 19:17:39.842753 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:17:39.874480 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:17:39.876483 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:17:39.879883 systemd[1]: Finished verity-setup.service. Oct 2 19:17:39.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:39.982663 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:17:39.983600 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:17:39.983093 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:17:39.983776 systemd[1]: Starting ignition-setup.service... Oct 2 19:17:39.984578 systemd[1]: Starting parse-ip-for-networkd.service... 
Oct 2 19:17:39.995199 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:17:39.995250 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:17:39.995263 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:17:40.003971 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:17:40.011901 systemd[1]: Finished ignition-setup.service. Oct 2 19:17:40.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.012859 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:17:40.056705 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:17:40.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.057000 audit: BPF prog-id=9 op=LOAD Oct 2 19:17:40.059041 systemd[1]: Starting systemd-networkd.service... Oct 2 19:17:40.081483 systemd-networkd[694]: lo: Link UP Oct 2 19:17:40.081493 systemd-networkd[694]: lo: Gained carrier Oct 2 19:17:40.082082 systemd-networkd[694]: Enumeration completed Oct 2 19:17:40.082161 systemd[1]: Started systemd-networkd.service. Oct 2 19:17:40.082696 systemd-networkd[694]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:17:40.084278 systemd-networkd[694]: eth0: Link UP Oct 2 19:17:40.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.084287 systemd-networkd[694]: eth0: Gained carrier Oct 2 19:17:40.084684 systemd[1]: Reached target network.target. Oct 2 19:17:40.085779 systemd[1]: Starting iscsiuio.service... Oct 2 19:17:40.105306 systemd[1]: Started iscsiuio.service. Oct 2 19:17:40.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.106528 systemd[1]: Starting iscsid.service... Oct 2 19:17:40.111102 iscsid[701]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:17:40.111102 iscsid[701]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:17:40.111102 iscsid[701]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:17:40.111102 iscsid[701]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:17:40.111102 iscsid[701]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:17:40.111102 iscsid[701]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:17:40.119557 systemd[1]: Started iscsid.service. Oct 2 19:17:40.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:40.120519 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:17:40.120844 systemd-networkd[694]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:17:40.134359 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:17:40.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.134930 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:17:40.135664 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:17:40.136866 systemd[1]: Reached target remote-fs.target. Oct 2 19:17:40.137765 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:17:40.146850 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:17:40.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.245507 ignition[620]: Ignition 2.14.0 Oct 2 19:17:40.245526 ignition[620]: Stage: fetch-offline Oct 2 19:17:40.245614 ignition[620]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:17:40.245638 ignition[620]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:17:40.245790 ignition[620]: parsed url from cmdline: "" Oct 2 19:17:40.245794 ignition[620]: no config URL provided Oct 2 19:17:40.245799 ignition[620]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:17:40.245807 ignition[620]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:17:40.245826 ignition[620]: op(1): [started] loading QEMU firmware config module Oct 2 19:17:40.245830 ignition[620]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:17:40.253048 ignition[620]: op(1): [finished] loading QEMU firmware config module Oct 2 19:17:40.263249 ignition[620]: parsing config with SHA512: 15bce2c8f09388098acfef087d688c17e10ab89eb80a9d8e383fdf5746e6cf6283b3ac343cdbe068e349f89f4a55dac75cd9e2b8a7340a1ca7270fad0332a560 Oct 2 19:17:40.309100 unknown[620]: fetched base config from "system" Oct 2 19:17:40.309113 unknown[620]: fetched user config from "qemu" Oct 2 19:17:40.309514 ignition[620]: fetch-offline: fetch-offline passed Oct 2 19:17:40.310590 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:17:40.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.309580 ignition[620]: Ignition finished successfully Oct 2 19:17:40.311683 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:17:40.312402 systemd[1]: Starting ignition-kargs.service... Oct 2 19:17:40.321704 ignition[717]: Ignition 2.14.0 Oct 2 19:17:40.321713 ignition[717]: Stage: kargs Oct 2 19:17:40.321826 ignition[717]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:17:40.321834 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:17:40.324150 systemd[1]: Finished ignition-kargs.service. Oct 2 19:17:40.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:40.322741 ignition[717]: kargs: kargs passed Oct 2 19:17:40.322782 ignition[717]: Ignition finished successfully Oct 2 19:17:40.326020 systemd[1]: Starting ignition-disks.service... Oct 2 19:17:40.334302 ignition[723]: Ignition 2.14.0 Oct 2 19:17:40.334311 ignition[723]: Stage: disks Oct 2 19:17:40.334406 ignition[723]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:17:40.334415 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:17:40.335991 systemd[1]: Finished ignition-disks.service. Oct 2 19:17:40.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.335327 ignition[723]: disks: disks passed Oct 2 19:17:40.337198 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:17:40.335372 ignition[723]: Ignition finished successfully Oct 2 19:17:40.338187 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:17:40.339151 systemd[1]: Reached target local-fs.target. Oct 2 19:17:40.340183 systemd[1]: Reached target sysinit.target. Oct 2 19:17:40.340349 systemd[1]: Reached target basic.target. Oct 2 19:17:40.341319 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:17:40.352445 systemd-fsck[731]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:17:40.357959 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:17:40.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.359307 systemd[1]: Mounting sysroot.mount... Oct 2 19:17:40.364662 systemd[1]: Mounted sysroot.mount. Oct 2 19:17:40.365527 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:17:40.365034 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:17:40.367157 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:17:40.367845 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:17:40.367886 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:17:40.367911 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:17:40.369832 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:17:40.371973 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:17:40.376242 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:17:40.380887 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:17:40.383877 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:17:40.387212 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:17:40.418269 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:17:40.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.419482 systemd[1]: Starting ignition-mount.service... Oct 2 19:17:40.420598 systemd[1]: Starting sysroot-boot.service... Oct 2 19:17:40.426742 bash[782]: umount: /sysroot/usr/share/oem: not mounted. 
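For context on the fetch-offline stage logged above: on the qemu platform Ignition pulls the user config from QEMU's firmware-config interface, which is why the log shows "modprobe qemu_fw_cfg" right before the config is parsed. A minimal sketch of reading that blob from sysfs is below; the key name opt/com.coreos/config and the exact sysfs layout are assumptions based on the usual Ignition/QEMU convention, not taken from this machine.

    # Sketch: read the Ignition user config exposed by QEMU via fw_cfg.
    # Assumes the qemu_fw_cfg module is loaded and the config was passed
    # under the conventional fw_cfg name "opt/com.coreos/config".
    from pathlib import Path

    FW_CFG = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

    def read_ignition_config() -> str:
        if not FW_CFG.exists():
            raise FileNotFoundError("fw_cfg entry not found; is qemu_fw_cfg loaded?")
        return FW_CFG.read_text()

    if __name__ == "__main__":
        print(read_ignition_config())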
Oct 2 19:17:40.436692 ignition[784]: INFO : Ignition 2.14.0 Oct 2 19:17:40.436692 ignition[784]: INFO : Stage: mount Oct 2 19:17:40.438089 ignition[784]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:17:40.438089 ignition[784]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:17:40.438089 ignition[784]: INFO : mount: mount passed Oct 2 19:17:40.438089 ignition[784]: INFO : Ignition finished successfully Oct 2 19:17:40.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:40.439369 systemd[1]: Finished ignition-mount.service. Oct 2 19:17:40.440293 systemd[1]: Finished sysroot-boot.service. Oct 2 19:17:40.890529 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:17:40.900758 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (792) Oct 2 19:17:40.903019 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:17:40.903046 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:17:40.903058 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:17:40.906426 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:17:40.907875 systemd[1]: Starting ignition-files.service... Oct 2 19:17:40.929445 ignition[812]: INFO : Ignition 2.14.0 Oct 2 19:17:40.929445 ignition[812]: INFO : Stage: files Oct 2 19:17:40.931001 ignition[812]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:17:40.931001 ignition[812]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:17:40.931001 ignition[812]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:17:40.933570 ignition[812]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:17:40.933570 ignition[812]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:17:40.935438 ignition[812]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:17:40.935438 ignition[812]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:17:40.935438 ignition[812]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:17:40.935147 unknown[812]: wrote ssh authorized keys file for user: core Oct 2 19:17:40.939087 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:17:40.939087 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:17:41.112622 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:17:41.383171 ignition[812]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:17:41.383171 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:17:41.386946 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:17:41.386946 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:17:41.486399 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:17:41.660408 ignition[812]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:17:41.660408 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:17:41.664904 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:17:41.664904 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:17:41.755233 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:17:41.826051 systemd-networkd[694]: eth0: Gained IPv6LL Oct 2 19:17:42.591111 ignition[812]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Oct 2 19:17:42.593405 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:17:42.593405 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:17:42.593405 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:17:42.661249 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:17:44.334338 ignition[812]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Oct 2 19:17:44.334338 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:17:44.338842 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:17:44.338842 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:17:44.338842 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:17:44.338842 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(9): op(a): [started] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:17:44.338842 ignition[812]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:17:44.362510 ignition[812]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:17:44.362510 ignition[812]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:17:44.362510 ignition[812]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:17:44.362510 ignition[812]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:17:44.389398 ignition[812]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:17:44.390777 ignition[812]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:17:44.390777 ignition[812]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:17:44.390777 ignition[812]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:17:44.390777 ignition[812]: INFO : files: files passed Oct 2 19:17:44.390777 ignition[812]: INFO : Ignition finished successfully Oct 2 19:17:44.399695 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 2 19:17:44.399733 kernel: audit: type=1130 audit(1696274264.391:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:44.391010 systemd[1]: Finished ignition-files.service. Oct 2 19:17:44.393564 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:17:44.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.397079 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:17:44.407476 kernel: audit: type=1130 audit(1696274264.401:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.407507 kernel: audit: type=1131 audit(1696274264.401:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.407564 initrd-setup-root-after-ignition[838]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:17:44.411699 kernel: audit: type=1130 audit(1696274264.406:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.397921 systemd[1]: Starting ignition-quench.service... Oct 2 19:17:44.412479 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:17:44.400816 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:17:44.400903 systemd[1]: Finished ignition-quench.service. Oct 2 19:17:44.402214 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:17:44.407572 systemd[1]: Reached target ignition-complete.target. Oct 2 19:17:44.410923 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:17:44.422907 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:17:44.422998 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:17:44.428921 kernel: audit: type=1130 audit(1696274264.424:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.428937 kernel: audit: type=1131 audit(1696274264.424:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:44.424175 systemd[1]: Reached target initrd-fs.target. Oct 2 19:17:44.428932 systemd[1]: Reached target initrd.target. Oct 2 19:17:44.429506 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:17:44.430327 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:17:44.440736 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:17:44.444367 kernel: audit: type=1130 audit(1696274264.440:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.441954 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:17:44.449883 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:17:44.470665 kernel: audit: type=1131 audit(1696274264.450:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.470684 kernel: audit: type=1131 audit(1696274264.454:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.470695 kernel: audit: type=1131 audit(1696274264.457:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.450202 systemd[1]: Stopped target remote-cryptsetup.target. 
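Looking back at the Ignition files stage logged above (SSH keys for the core user, downloads of cni-plugins, crictl, kubeadm and kubelet with sha512 verification, and enabling the prepare-*.service units): all of that is driven by the Ignition config fetched earlier. A rough, illustrative sketch of such a config follows; the field names use the spec-3.x layout, and the spec version, the placeholder SSH key, and the unit contents are assumptions rather than this machine's actual config, while the kubeadm URL and sha512 sum are copied from the log.

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder-key"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/bin/kubeadm",
            "mode": 493,
            "contents": {
              "source": "https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubeadm",
              "verification": { "hash": "sha512-f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836" }
            }
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-cni-plugins.service", "enabled": true, "contents": "[Unit]\nDescription=Unpack CNI plugins\n..." }
        ]
      }
    }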
Oct 2 19:17:44.471932 iscsid[701]: iscsid shutting down. Oct 2 19:17:44.450407 systemd[1]: Stopped target timers.target. Oct 2 19:17:44.450614 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:17:44.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.450727 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:17:44.476949 ignition[854]: INFO : Ignition 2.14.0 Oct 2 19:17:44.476949 ignition[854]: INFO : Stage: umount Oct 2 19:17:44.476949 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:17:44.476949 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:17:44.476949 ignition[854]: INFO : umount: umount passed Oct 2 19:17:44.476949 ignition[854]: INFO : Ignition finished successfully Oct 2 19:17:44.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.451011 systemd[1]: Stopped target initrd.target. Oct 2 19:17:44.453216 systemd[1]: Stopped target basic.target. Oct 2 19:17:44.453528 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:17:44.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.453920 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:17:44.454128 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:17:44.454241 systemd[1]: Stopped target remote-fs.target. Oct 2 19:17:44.454342 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:17:44.454459 systemd[1]: Stopped target sysinit.target. Oct 2 19:17:44.454731 systemd[1]: Stopped target local-fs.target. Oct 2 19:17:44.455065 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:17:44.455151 systemd[1]: Stopped target swap.target. Oct 2 19:17:44.455244 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:17:44.455337 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:17:44.455653 systemd[1]: Stopped target cryptsetup.target. 
Oct 2 19:17:44.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.457926 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:17:44.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.458013 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:17:44.458289 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:17:44.458372 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:17:44.460900 systemd[1]: Stopped target paths.target. Oct 2 19:17:44.461115 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:17:44.498000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:17:44.462770 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:17:44.462985 systemd[1]: Stopped target slices.target. Oct 2 19:17:44.463183 systemd[1]: Stopped target sockets.target. Oct 2 19:17:44.463307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:17:44.463410 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:17:44.463750 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:17:44.463833 systemd[1]: Stopped ignition-files.service. Oct 2 19:17:44.464876 systemd[1]: Stopping ignition-mount.service... Oct 2 19:17:44.465285 systemd[1]: Stopping iscsid.service... Oct 2 19:17:44.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.466267 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:17:44.466465 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:17:44.466574 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:17:44.466984 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:17:44.467060 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:17:44.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.469965 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:17:44.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.470039 systemd[1]: Stopped iscsid.service. Oct 2 19:17:44.471015 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:17:44.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.471081 systemd[1]: Closed iscsid.socket. Oct 2 19:17:44.473480 systemd[1]: Stopping iscsiuio.service... Oct 2 19:17:44.476072 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 2 19:17:44.476149 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:17:44.477412 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:17:44.477493 systemd[1]: Stopped ignition-mount.service. Oct 2 19:17:44.478979 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:17:44.479056 systemd[1]: Stopped iscsiuio.service. Oct 2 19:17:44.480857 systemd[1]: Stopped target network.target. Oct 2 19:17:44.481316 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:17:44.481343 systemd[1]: Closed iscsiuio.socket. Oct 2 19:17:44.481539 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:17:44.481569 systemd[1]: Stopped ignition-disks.service. Oct 2 19:17:44.481894 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:17:44.481923 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:17:44.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.482091 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:17:44.482117 systemd[1]: Stopped ignition-setup.service. Oct 2 19:17:44.482732 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:17:44.486010 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:17:44.487710 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:17:44.489090 systemd-networkd[694]: eth0: DHCPv6 lease lost Oct 2 19:17:44.525000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:17:44.491697 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:17:44.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.491811 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:17:44.493057 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:17:44.493094 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:17:44.494507 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:17:44.494588 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:17:44.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.504036 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:17:44.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.504144 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:17:44.504468 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:17:44.504494 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:17:44.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.507312 systemd[1]: Stopping network-cleanup.service... 
Oct 2 19:17:44.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.507592 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:17:44.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.507645 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:17:44.507986 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:17:44.508031 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:17:44.510914 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:17:44.510964 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:17:44.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:44.511473 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:17:44.517038 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:17:44.521564 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:17:44.521691 systemd[1]: Stopped network-cleanup.service. Oct 2 19:17:44.526554 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:17:44.526664 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:17:44.528533 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:17:44.528569 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:17:44.529797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:17:44.529825 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:17:44.530182 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:17:44.530213 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:17:44.530425 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:17:44.530466 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:17:44.533205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:17:44.533260 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:17:44.535070 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:17:44.535969 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:17:44.536029 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:17:44.537809 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:17:44.537844 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:17:44.538184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:17:44.538236 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:17:44.540279 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:17:44.541914 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:17:44.541999 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Oct 2 19:17:44.542705 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:17:44.545313 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:17:44.563116 systemd[1]: Switching root. Oct 2 19:17:44.580642 systemd-journald[198]: Journal stopped Oct 2 19:17:48.880431 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Oct 2 19:17:48.880494 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:17:48.880517 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:17:48.880532 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:17:48.880546 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:17:48.880559 kernel: SELinux: policy capability open_perms=1 Oct 2 19:17:48.880594 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:17:48.880612 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:17:48.880625 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:17:48.880641 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:17:48.880654 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:17:48.880667 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:17:48.880682 systemd[1]: Successfully loaded SELinux policy in 39.375ms. Oct 2 19:17:48.880699 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.797ms. Oct 2 19:17:48.880735 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:17:48.880746 systemd[1]: Detected virtualization kvm. Oct 2 19:17:48.880760 systemd[1]: Detected architecture x86-64. Oct 2 19:17:48.880777 systemd[1]: Detected first boot. Oct 2 19:17:48.880792 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:17:48.880814 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:17:48.880829 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:17:48.880845 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:17:48.880861 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:17:48.880877 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:17:48.880891 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:17:48.880909 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:17:48.880925 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:17:48.880949 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:17:48.880963 systemd[1]: Created slice system-getty.slice. Oct 2 19:17:48.880977 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:17:48.880990 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:17:48.881009 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:17:48.881024 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:17:48.881038 systemd[1]: Created slice user.slice. 
Oct 2 19:17:48.881116 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:17:48.881127 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:17:48.881175 systemd[1]: Set up automount boot.automount. Oct 2 19:17:48.881211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:17:48.881231 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:17:48.881253 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:17:48.881286 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:17:48.881316 systemd[1]: Reached target integritysetup.target. Oct 2 19:17:48.881353 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:17:48.881369 systemd[1]: Reached target remote-fs.target. Oct 2 19:17:48.881389 systemd[1]: Reached target slices.target. Oct 2 19:17:48.881400 systemd[1]: Reached target swap.target. Oct 2 19:17:48.881411 systemd[1]: Reached target torcx.target. Oct 2 19:17:48.881425 systemd[1]: Reached target veritysetup.target. Oct 2 19:17:48.881440 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:17:48.881455 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:17:48.881469 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:17:48.881479 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:17:48.881490 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:17:48.881501 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:17:48.881520 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:17:48.881531 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:17:48.881542 systemd[1]: Mounting media.mount... Oct 2 19:17:48.881552 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:17:48.881562 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:17:48.881573 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:17:48.881584 systemd[1]: Mounting tmp.mount... Oct 2 19:17:48.881595 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:17:48.881605 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:17:48.881622 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:17:48.881633 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:17:48.881644 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:17:48.881655 systemd[1]: Starting modprobe@drm.service... Oct 2 19:17:48.881666 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:17:48.881677 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:17:48.881687 systemd[1]: Starting modprobe@loop.service... Oct 2 19:17:48.881698 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:17:48.881738 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:17:48.881750 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:17:48.881760 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:17:48.881771 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:17:48.881781 systemd[1]: Stopped systemd-journald.service. Oct 2 19:17:48.881791 kernel: loop: module loaded Oct 2 19:17:48.881805 kernel: fuse: init (API version 7.34) Oct 2 19:17:48.881815 systemd[1]: Starting systemd-journald.service... Oct 2 19:17:48.881825 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:17:48.881837 systemd[1]: Starting systemd-network-generator.service... 
Oct 2 19:17:48.881860 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:17:48.881876 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:17:48.881891 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:17:48.881902 systemd[1]: Stopped verity-setup.service. Oct 2 19:17:48.881913 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:17:48.881924 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:17:48.881935 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:17:48.881945 systemd[1]: Mounted media.mount. Oct 2 19:17:48.881957 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:17:48.882010 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:17:48.882023 systemd[1]: Mounted tmp.mount. Oct 2 19:17:48.882034 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:17:48.882047 systemd-journald[961]: Journal started Oct 2 19:17:48.882093 systemd-journald[961]: Runtime Journal (/run/log/journal/6aa5dd07a8ce444bbe75102d2c658146) is 6.0M, max 48.4M, 42.4M free. Oct 2 19:17:44.657000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:17:45.197000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:17:45.197000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:17:45.197000 audit: BPF prog-id=10 op=LOAD Oct 2 19:17:45.197000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:17:45.197000 audit: BPF prog-id=11 op=LOAD Oct 2 19:17:45.197000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:17:48.773000 audit: BPF prog-id=12 op=LOAD Oct 2 19:17:48.773000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:17:48.773000 audit: BPF prog-id=13 op=LOAD Oct 2 19:17:48.773000 audit: BPF prog-id=14 op=LOAD Oct 2 19:17:48.773000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:17:48.773000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:17:48.774000 audit: BPF prog-id=15 op=LOAD Oct 2 19:17:48.774000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:17:48.774000 audit: BPF prog-id=16 op=LOAD Oct 2 19:17:48.774000 audit: BPF prog-id=17 op=LOAD Oct 2 19:17:48.774000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:17:48.774000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:17:48.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.786000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:17:48.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:48.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.856000 audit: BPF prog-id=18 op=LOAD Oct 2 19:17:48.856000 audit: BPF prog-id=19 op=LOAD Oct 2 19:17:48.856000 audit: BPF prog-id=20 op=LOAD Oct 2 19:17:48.856000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:17:48.856000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:17:48.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.878000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:17:48.878000 audit[961]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffea31a7860 a2=4000 a3=7ffea31a78fc items=0 ppid=1 pid=961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:48.878000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:17:45.265306 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:17:48.772262 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:17:45.265672 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:17:48.772276 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:17:45.265689 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:17:48.775785 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:17:48.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:45.265729 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:17:45.265738 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:17:45.265767 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:17:45.265780 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:17:45.265970 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:17:45.266009 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:17:45.266020 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:17:45.266408 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:17:45.266445 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:17:45.266461 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:17:48.884729 systemd[1]: Started systemd-journald.service. Oct 2 19:17:45.266474 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:17:45.266489 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:17:48.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:45.266500 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:17:48.455007 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:17:48.455366 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:17:48.455555 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:17:48.455771 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:17:48.885501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:17:48.455828 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:17:48.455898 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:17:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:17:48.892060 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:17:48.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.967535 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:17:48.967709 systemd[1]: Finished modprobe@drm.service. Oct 2 19:17:48.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.968609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:17:48.968781 systemd[1]: Finished modprobe@efi_pstore.service. 
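The torcx-generator messages interleaved above show torcx resolving the "vendor" profile, unpacking the docker:com.coreos.cl archive from /usr/share/torcx/store, propagating its binaries and unit files, and sealing the result to /run/metadata/torcx. As a rough sketch only (the exact manifest schema is an assumption here), such a profile is a small JSON file that lists image/reference pairs:

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }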
Oct 2 19:17:48.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.969688 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:17:48.969844 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:17:48.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.970817 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:17:48.970985 systemd[1]: Finished modprobe@loop.service. Oct 2 19:17:48.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.971981 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:17:48.972110 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:17:48.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.972949 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:17:48.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.973819 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:17:48.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.974612 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:17:48.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.975421 systemd[1]: Finished systemd-udev-trigger.service. 
Oct 2 19:17:48.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.976276 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:17:48.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.977365 systemd[1]: Reached target network-pre.target. Oct 2 19:17:48.979518 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:17:48.981203 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:17:48.981736 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:17:48.982982 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:17:48.985090 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:17:48.985854 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:17:48.986663 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:17:48.987248 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:17:48.993305 systemd-journald[961]: Time spent on flushing to /var/log/journal/6aa5dd07a8ce444bbe75102d2c658146 is 20.452ms for 1152 entries. Oct 2 19:17:48.993305 systemd-journald[961]: System Journal (/var/log/journal/6aa5dd07a8ce444bbe75102d2c658146) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:17:49.059845 systemd-journald[961]: Received client request to flush runtime journal. Oct 2 19:17:49.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:48.988104 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:17:48.989737 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:17:49.060794 udevadm[989]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:17:48.991352 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:17:48.994574 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:17:48.995390 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:17:49.037139 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:17:49.038149 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:17:49.047177 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:17:49.049159 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:17:49.061855 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:17:49.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.109883 systemd[1]: Finished systemd-sysctl.service. 
Oct 2 19:17:49.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.128986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:17:49.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.885991 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:17:49.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.889742 kernel: kauditd_printk_skb: 92 callbacks suppressed Oct 2 19:17:49.889845 kernel: audit: type=1130 audit(1696274269.886:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.889866 kernel: audit: type=1334 audit(1696274269.889:135): prog-id=21 op=LOAD Oct 2 19:17:49.889000 audit: BPF prog-id=21 op=LOAD Oct 2 19:17:49.889000 audit: BPF prog-id=22 op=LOAD Oct 2 19:17:49.891301 kernel: audit: type=1334 audit(1696274269.889:136): prog-id=22 op=LOAD Oct 2 19:17:49.891400 kernel: audit: type=1334 audit(1696274269.889:137): prog-id=7 op=UNLOAD Oct 2 19:17:49.889000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:17:49.891277 systemd[1]: Starting systemd-udevd.service... Oct 2 19:17:49.892071 kernel: audit: type=1334 audit(1696274269.889:138): prog-id=8 op=UNLOAD Oct 2 19:17:49.889000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:17:49.909736 systemd-udevd[996]: Using default interface naming scheme 'v252'. Oct 2 19:17:49.925369 systemd[1]: Started systemd-udevd.service. Oct 2 19:17:49.931269 kernel: audit: type=1130 audit(1696274269.925:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.931373 kernel: audit: type=1334 audit(1696274269.927:140): prog-id=23 op=LOAD Oct 2 19:17:49.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.927000 audit: BPF prog-id=23 op=LOAD Oct 2 19:17:49.931147 systemd[1]: Starting systemd-networkd.service... Oct 2 19:17:49.935000 audit: BPF prog-id=24 op=LOAD Oct 2 19:17:49.938546 kernel: audit: type=1334 audit(1696274269.935:141): prog-id=24 op=LOAD Oct 2 19:17:49.938574 kernel: audit: type=1334 audit(1696274269.936:142): prog-id=25 op=LOAD Oct 2 19:17:49.938594 kernel: audit: type=1334 audit(1696274269.937:143): prog-id=26 op=LOAD Oct 2 19:17:49.936000 audit: BPF prog-id=25 op=LOAD Oct 2 19:17:49.937000 audit: BPF prog-id=26 op=LOAD Oct 2 19:17:49.938564 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:17:49.953237 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:17:49.972823 systemd[1]: Started systemd-userdbd.service. 
Oct 2 19:17:49.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:49.986500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:17:50.003755 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:17:50.018068 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:17:50.015000 audit[1012]: AVC avc: denied { confidentiality } for pid=1012 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:17:50.035607 systemd-networkd[1007]: lo: Link UP Oct 2 19:17:50.035619 systemd-networkd[1007]: lo: Gained carrier Oct 2 19:17:50.036117 systemd-networkd[1007]: Enumeration completed Oct 2 19:17:50.036221 systemd[1]: Started systemd-networkd.service. Oct 2 19:17:50.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.036239 systemd-networkd[1007]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:17:50.054885 systemd-networkd[1007]: eth0: Link UP Oct 2 19:17:50.054890 systemd-networkd[1007]: eth0: Gained carrier Oct 2 19:17:50.015000 audit[1012]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5575651592e0 a1=32194 a2=7f8cb51cbbc5 a3=5 items=106 ppid=996 pid=1012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:50.015000 audit: CWD cwd="/" Oct 2 19:17:50.015000 audit: PATH item=0 name=(null) inode=11867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=1 name=(null) inode=11868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=2 name=(null) inode=11867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=3 name=(null) inode=11869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=4 name=(null) inode=11867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=5 name=(null) inode=11870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=6 name=(null) inode=11870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=7 name=(null) inode=11871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=8 name=(null) inode=11870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=9 name=(null) inode=11872 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=10 name=(null) inode=11870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=11 name=(null) inode=11873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=12 name=(null) inode=11870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=13 name=(null) inode=11874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=14 name=(null) inode=11870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=15 name=(null) inode=11875 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=16 name=(null) inode=11867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=17 name=(null) inode=11876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=18 name=(null) inode=11876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=19 name=(null) inode=11877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=20 name=(null) inode=11876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=21 name=(null) inode=11878 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=22 name=(null) inode=11876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=23 name=(null) inode=11879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=24 name=(null) inode=11876 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=25 name=(null) inode=11880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=26 name=(null) inode=11876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=27 name=(null) inode=11881 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=28 name=(null) inode=11867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=29 name=(null) inode=11882 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=30 name=(null) inode=11882 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=31 name=(null) inode=11883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=32 name=(null) inode=11882 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=33 name=(null) inode=11884 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=34 name=(null) inode=11882 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=35 name=(null) inode=11885 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=36 name=(null) inode=11882 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=37 name=(null) inode=11886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=38 name=(null) inode=11882 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=39 name=(null) inode=11887 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=40 name=(null) inode=11867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=41 name=(null) inode=11888 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=42 name=(null) inode=11888 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=43 name=(null) inode=11889 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=44 name=(null) inode=11888 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=45 name=(null) inode=11890 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=46 name=(null) inode=11888 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=47 name=(null) inode=11891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=48 name=(null) inode=11888 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=49 name=(null) inode=11892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=50 name=(null) inode=11888 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=51 name=(null) inode=11893 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=52 name=(null) inode=1040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=53 name=(null) inode=11894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=54 name=(null) inode=11894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=55 name=(null) inode=11895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=56 name=(null) inode=11894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 
audit: PATH item=57 name=(null) inode=11896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=58 name=(null) inode=11894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=59 name=(null) inode=11897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=60 name=(null) inode=11897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=61 name=(null) inode=11898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=62 name=(null) inode=11897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=63 name=(null) inode=11899 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=64 name=(null) inode=11897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=65 name=(null) inode=11900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=66 name=(null) inode=11897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=67 name=(null) inode=11901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=68 name=(null) inode=11897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=69 name=(null) inode=11902 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=70 name=(null) inode=11894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=71 name=(null) inode=11903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=72 name=(null) inode=11903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=73 name=(null) inode=11904 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=74 name=(null) inode=11903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=75 name=(null) inode=11905 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=76 name=(null) inode=11903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=77 name=(null) inode=11906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=78 name=(null) inode=11903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=79 name=(null) inode=11907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=80 name=(null) inode=11903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=81 name=(null) inode=11908 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=82 name=(null) inode=11894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=83 name=(null) inode=11909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=84 name=(null) inode=11909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=85 name=(null) inode=11910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=86 name=(null) inode=11909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=87 name=(null) inode=11911 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=88 name=(null) inode=11909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=89 name=(null) inode=11912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=90 name=(null) inode=11909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=91 name=(null) inode=11913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=92 name=(null) inode=11909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=93 name=(null) inode=11914 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=94 name=(null) inode=11894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=95 name=(null) inode=11915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=96 name=(null) inode=11915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=97 name=(null) inode=11916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=98 name=(null) inode=11915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=99 name=(null) inode=11917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=100 name=(null) inode=11915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=101 name=(null) inode=11918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=102 name=(null) inode=11915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=103 name=(null) inode=11919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=104 name=(null) inode=11915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PATH item=105 name=(null) inode=11920 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:17:50.015000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 
19:17:50.061743 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Oct 2 19:17:50.069873 systemd-networkd[1007]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:17:50.075736 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:17:50.079736 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:17:50.172109 kernel: kvm: Nested Virtualization enabled Oct 2 19:17:50.172421 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:17:50.186738 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:17:50.211118 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:17:50.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.212807 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:17:50.237936 lvm[1033]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:17:50.262522 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:17:50.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.263287 systemd[1]: Reached target cryptsetup.target. Oct 2 19:17:50.264858 systemd[1]: Starting lvm2-activation.service... Oct 2 19:17:50.268068 lvm[1034]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:17:50.293944 systemd[1]: Finished lvm2-activation.service. Oct 2 19:17:50.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.294731 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:17:50.295326 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:17:50.295349 systemd[1]: Reached target local-fs.target. Oct 2 19:17:50.295937 systemd[1]: Reached target machines.target. Oct 2 19:17:50.297870 systemd[1]: Starting ldconfig.service... Oct 2 19:17:50.298909 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:17:50.298974 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:17:50.300018 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:17:50.301803 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:17:50.304093 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:17:50.305668 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:17:50.305711 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:17:50.307214 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:17:50.308516 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1036 (bootctl) Oct 2 19:17:50.309737 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:17:50.313057 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Oct 2 19:17:50.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.321191 systemd-tmpfiles[1039]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:17:50.322156 systemd-tmpfiles[1039]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:17:50.323804 systemd-tmpfiles[1039]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:17:50.555848 systemd-fsck[1044]: fsck.fat 4.2 (2021-01-31) Oct 2 19:17:50.555848 systemd-fsck[1044]: /dev/vda1: 790 files, 115092/258078 clusters Oct 2 19:17:50.557339 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:17:50.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.559527 systemd[1]: Mounting boot.mount... Oct 2 19:17:50.625283 systemd[1]: Mounted boot.mount. Oct 2 19:17:50.647350 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:17:50.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.726561 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:17:50.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.728886 systemd[1]: Starting audit-rules.service... Oct 2 19:17:50.730680 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:17:50.732684 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:17:50.733000 audit: BPF prog-id=27 op=LOAD Oct 2 19:17:50.735147 systemd[1]: Starting systemd-resolved.service... Oct 2 19:17:50.735000 audit: BPF prog-id=28 op=LOAD Oct 2 19:17:50.742000 audit[1056]: SYSTEM_BOOT pid=1056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.737278 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:17:50.738843 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:17:50.742061 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:17:50.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.774248 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:17:50.780944 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:17:50.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:50.842475 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:17:50.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:50.843710 systemd[1]: Reached target time-set.target. Oct 2 19:17:51.247578 systemd-timesyncd[1054]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:17:51.247623 systemd-timesyncd[1054]: Initial clock synchronization to Mon 2023-10-02 19:17:51.247507 UTC. Oct 2 19:17:51.250985 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:17:51.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:51.253837 systemd-resolved[1051]: Positive Trust Anchors: Oct 2 19:17:51.253849 systemd-resolved[1051]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:17:51.253876 systemd-resolved[1051]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:17:51.255000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:17:51.255000 audit[1069]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0747fe10 a2=420 a3=0 items=0 ppid=1047 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:51.255000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:17:51.257395 augenrules[1069]: No rules Oct 2 19:17:51.257809 systemd[1]: Finished audit-rules.service. Oct 2 19:17:51.294892 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:17:51.295527 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:17:51.295806 systemd-resolved[1051]: Defaulting to hostname 'linux'. Oct 2 19:17:51.297322 systemd[1]: Started systemd-resolved.service. Oct 2 19:17:51.297932 systemd[1]: Reached target network.target. Oct 2 19:17:51.298650 systemd[1]: Reached target nss-lookup.target. Oct 2 19:17:51.429256 ldconfig[1035]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:17:51.539066 systemd[1]: Finished ldconfig.service. Oct 2 19:17:51.541179 systemd[1]: Starting systemd-update-done.service... Oct 2 19:17:51.548687 systemd[1]: Finished systemd-update-done.service. Oct 2 19:17:51.549456 systemd[1]: Reached target sysinit.target. Oct 2 19:17:51.550089 systemd[1]: Started motdgen.path. Oct 2 19:17:51.550680 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:17:51.551687 systemd[1]: Started logrotate.timer. Oct 2 19:17:51.552360 systemd[1]: Started mdadm.timer. Oct 2 19:17:51.552879 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Oct 2 19:17:51.553625 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:17:51.553658 systemd[1]: Reached target paths.target. Oct 2 19:17:51.554286 systemd[1]: Reached target timers.target. Oct 2 19:17:51.555451 systemd[1]: Listening on dbus.socket. Oct 2 19:17:51.557807 systemd[1]: Starting docker.socket... Oct 2 19:17:51.561755 systemd[1]: Listening on sshd.socket. Oct 2 19:17:51.562473 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:17:51.562834 systemd[1]: Listening on docker.socket. Oct 2 19:17:51.563646 systemd[1]: Reached target sockets.target. Oct 2 19:17:51.564289 systemd[1]: Reached target basic.target. Oct 2 19:17:51.565024 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:17:51.565049 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:17:51.566340 systemd[1]: Starting containerd.service... Oct 2 19:17:51.568565 systemd[1]: Starting dbus.service... Oct 2 19:17:51.571084 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:17:51.573585 systemd[1]: Starting extend-filesystems.service... Oct 2 19:17:51.574385 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:17:51.579262 jq[1079]: false Oct 2 19:17:51.610715 systemd[1]: Starting motdgen.service... Oct 2 19:17:51.612926 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:17:51.614666 systemd[1]: Starting prepare-critools.service... Oct 2 19:17:51.616275 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:17:51.617871 systemd[1]: Starting sshd-keygen.service... Oct 2 19:17:51.621910 extend-filesystems[1080]: Found sr0 Oct 2 19:17:51.621910 extend-filesystems[1080]: Found vda Oct 2 19:17:51.621910 extend-filesystems[1080]: Found vda1 Oct 2 19:17:51.621910 extend-filesystems[1080]: Found vda2 Oct 2 19:17:51.621910 extend-filesystems[1080]: Found vda3 Oct 2 19:17:51.621910 extend-filesystems[1080]: Found usr Oct 2 19:17:51.621910 extend-filesystems[1080]: Found vda4 Oct 2 19:17:51.621910 extend-filesystems[1080]: Found vda6 Oct 2 19:17:51.621910 extend-filesystems[1080]: Found vda7 Oct 2 19:17:51.621910 extend-filesystems[1080]: Found vda9 Oct 2 19:17:51.621910 extend-filesystems[1080]: Checking size of /dev/vda9 Oct 2 19:17:51.622827 systemd[1]: Starting systemd-logind.service... Oct 2 19:17:51.624675 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:17:51.624763 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:17:51.625785 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:17:51.626500 systemd[1]: Starting update-engine.service... Oct 2 19:17:51.631475 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:17:51.634870 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:17:51.635090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Oct 2 19:17:51.639782 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:17:51.639963 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:17:51.645539 jq[1095]: true Oct 2 19:17:51.650627 tar[1104]: crictl Oct 2 19:17:51.650815 tar[1102]: ./ Oct 2 19:17:51.650815 tar[1102]: ./loopback Oct 2 19:17:51.653530 extend-filesystems[1080]: Old size kept for /dev/vda9 Oct 2 19:17:51.656490 dbus-daemon[1078]: [system] SELinux support is enabled Oct 2 19:17:51.654635 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:17:51.654919 systemd[1]: Finished extend-filesystems.service. Oct 2 19:17:51.658266 systemd[1]: Started dbus.service. Oct 2 19:17:51.661115 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:17:51.661172 systemd[1]: Reached target system-config.target. Oct 2 19:17:51.661831 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:17:51.661852 systemd[1]: Reached target user-config.target. Oct 2 19:17:51.692467 jq[1108]: true Oct 2 19:17:51.697867 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:17:51.698088 systemd[1]: Finished motdgen.service. Oct 2 19:17:51.698952 systemd-logind[1089]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:17:51.698973 systemd-logind[1089]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:17:51.699253 systemd-logind[1089]: New seat seat0. Oct 2 19:17:51.700721 systemd[1]: Started systemd-logind.service. Oct 2 19:17:51.734177 tar[1102]: ./bandwidth Oct 2 19:17:51.744891 env[1105]: time="2023-10-02T19:17:51.744824142Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:17:51.763390 env[1105]: time="2023-10-02T19:17:51.763281252Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:17:51.763808 env[1105]: time="2023-10-02T19:17:51.763785818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:51.765598 env[1105]: time="2023-10-02T19:17:51.765569033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:17:51.765700 env[1105]: time="2023-10-02T19:17:51.765679941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:51.766064 env[1105]: time="2023-10-02T19:17:51.766036450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:17:51.766231 env[1105]: time="2023-10-02T19:17:51.766182744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:17:51.766379 env[1105]: time="2023-10-02T19:17:51.766344578Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:17:51.766490 env[1105]: time="2023-10-02T19:17:51.766445948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:51.766709 env[1105]: time="2023-10-02T19:17:51.766682852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:51.767222 env[1105]: time="2023-10-02T19:17:51.767189783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:17:51.767505 env[1105]: time="2023-10-02T19:17:51.767481380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:17:51.767639 env[1105]: time="2023-10-02T19:17:51.767612516Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:17:51.767793 env[1105]: time="2023-10-02T19:17:51.767761836Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:17:51.767899 env[1105]: time="2023-10-02T19:17:51.767872293Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:17:51.816051 systemd[1]: Created slice system-sshd.slice. Oct 2 19:17:51.830954 tar[1102]: ./ptp Oct 2 19:17:51.877347 tar[1102]: ./vlan Oct 2 19:17:51.924454 tar[1102]: ./host-device Oct 2 19:17:51.934792 update_engine[1091]: I1002 19:17:51.934147 1091 main.cc:92] Flatcar Update Engine starting Oct 2 19:17:51.937975 update_engine[1091]: I1002 19:17:51.937885 1091 update_check_scheduler.cc:74] Next update check in 11m12s Oct 2 19:17:51.937939 systemd[1]: Started update-engine.service. Oct 2 19:17:51.941569 systemd[1]: Started locksmithd.service. Oct 2 19:17:51.957357 systemd-networkd[1007]: eth0: Gained IPv6LL Oct 2 19:17:51.962298 tar[1102]: ./tuning Oct 2 19:17:51.994209 bash[1131]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:17:51.994543 env[1105]: time="2023-10-02T19:17:51.994463023Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:17:51.994543 env[1105]: time="2023-10-02T19:17:51.994524167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:17:51.994591 env[1105]: time="2023-10-02T19:17:51.994554995Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:17:51.994660 env[1105]: time="2023-10-02T19:17:51.994614236Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:17:51.994724 env[1105]: time="2023-10-02T19:17:51.994694898Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:17:51.994724 env[1105]: time="2023-10-02T19:17:51.994712701Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:17:51.994772 env[1105]: time="2023-10-02T19:17:51.994727258Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Oct 2 19:17:51.994772 env[1105]: time="2023-10-02T19:17:51.994743098Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:17:51.994772 env[1105]: time="2023-10-02T19:17:51.994765490Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:17:51.994825 env[1105]: time="2023-10-02T19:17:51.994783143Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:17:51.994825 env[1105]: time="2023-10-02T19:17:51.994805856Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:17:51.994867 env[1105]: time="2023-10-02T19:17:51.994826094Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:17:51.995582 env[1105]: time="2023-10-02T19:17:51.995560201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:17:51.995709 env[1105]: time="2023-10-02T19:17:51.995686768Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:17:51.995753 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:17:51.996053 env[1105]: time="2023-10-02T19:17:51.996028509Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:17:51.996105 env[1105]: time="2023-10-02T19:17:51.996070468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996105 env[1105]: time="2023-10-02T19:17:51.996086949Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:17:51.996193 env[1105]: time="2023-10-02T19:17:51.996173451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996248 env[1105]: time="2023-10-02T19:17:51.996193769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996248 env[1105]: time="2023-10-02T19:17:51.996225569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996286 env[1105]: time="2023-10-02T19:17:51.996252780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996286 env[1105]: time="2023-10-02T19:17:51.996268710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996329 env[1105]: time="2023-10-02T19:17:51.996284259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996329 env[1105]: time="2023-10-02T19:17:51.996297734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996329 env[1105]: time="2023-10-02T19:17:51.996310769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996391 env[1105]: time="2023-10-02T19:17:51.996328923Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:17:51.996494 env[1105]: time="2023-10-02T19:17:51.996462844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Oct 2 19:17:51.996494 env[1105]: time="2023-10-02T19:17:51.996488792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996562 env[1105]: time="2023-10-02T19:17:51.996503059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996562 env[1105]: time="2023-10-02T19:17:51.996520893Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:17:51.996562 env[1105]: time="2023-10-02T19:17:51.996544988Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:17:51.996562 env[1105]: time="2023-10-02T19:17:51.996558052Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:17:51.996638 env[1105]: time="2023-10-02T19:17:51.996592477Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:17:51.996638 env[1105]: time="2023-10-02T19:17:51.996633895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:17:51.996971 env[1105]: time="2023-10-02T19:17:51.996893492Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:17:52.000178 env[1105]: time="2023-10-02T19:17:51.996983711Z" level=info msg="Connect containerd service" Oct 2 19:17:52.000178 
env[1105]: time="2023-10-02T19:17:51.997039015Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 2 19:17:52.000178 env[1105]: time="2023-10-02T19:17:51.997726053Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 2 19:17:52.000178 env[1105]: time="2023-10-02T19:17:51.998020015Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 2 19:17:52.000178 env[1105]: time="2023-10-02T19:17:51.998058767Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 2 19:17:52.000178 env[1105]: time="2023-10-02T19:17:51.998113560Z" level=info msg="containerd successfully booted in 0.256839s"
Oct 2 19:17:52.000349 tar[1102]: ./vrf
Oct 2 19:17:51.998179 systemd[1]: Started containerd.service.
Oct 2 19:17:52.000689 env[1105]: time="2023-10-02T19:17:52.000541053Z" level=info msg="Start subscribing containerd event"
Oct 2 19:17:52.000689 env[1105]: time="2023-10-02T19:17:52.000603260Z" level=info msg="Start recovering state"
Oct 2 19:17:52.002009 env[1105]: time="2023-10-02T19:17:52.000892362Z" level=info msg="Start event monitor"
Oct 2 19:17:52.002009 env[1105]: time="2023-10-02T19:17:52.000934501Z" level=info msg="Start snapshots syncer"
Oct 2 19:17:52.002009 env[1105]: time="2023-10-02T19:17:52.000944981Z" level=info msg="Start cni network conf syncer for default"
Oct 2 19:17:52.002009 env[1105]: time="2023-10-02T19:17:52.000955310Z" level=info msg="Start streaming server"
Oct 2 19:17:52.047821 tar[1102]: ./sbr
Oct 2 19:17:52.081907 tar[1102]: ./tap
Oct 2 19:17:52.117870 locksmithd[1138]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 2 19:17:52.120351 tar[1102]: ./dhcp
Oct 2 19:17:52.195976 systemd[1]: Finished prepare-critools.service.
Oct 2 19:17:52.214762 tar[1102]: ./static
Oct 2 19:17:52.235893 tar[1102]: ./firewall
Oct 2 19:17:52.268975 tar[1102]: ./macvlan
Oct 2 19:17:52.299887 tar[1102]: ./dummy
Oct 2 19:17:52.330483 tar[1102]: ./bridge
Oct 2 19:17:52.362193 tar[1102]: ./ipvlan
Oct 2 19:17:52.391852 tar[1102]: ./portmap
Oct 2 19:17:52.422653 tar[1102]: ./host-local
Oct 2 19:17:52.456129 systemd[1]: Finished prepare-cni-plugins.service.
Oct 2 19:17:52.649160 sshd_keygen[1103]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 2 19:17:52.668019 systemd[1]: Finished sshd-keygen.service.
Oct 2 19:17:52.670534 systemd[1]: Starting issuegen.service...
Oct 2 19:17:52.672319 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:37024.service.
Oct 2 19:17:52.675672 systemd[1]: issuegen.service: Deactivated successfully.
Oct 2 19:17:52.675825 systemd[1]: Finished issuegen.service.
Oct 2 19:17:52.677937 systemd[1]: Starting systemd-user-sessions.service...
Oct 2 19:17:52.684060 systemd[1]: Finished systemd-user-sessions.service.
Oct 2 19:17:52.686433 systemd[1]: Started getty@tty1.service.
Oct 2 19:17:52.688562 systemd[1]: Started serial-getty@ttyS0.service.
Oct 2 19:17:52.725778 systemd[1]: Reached target getty.target.
Oct 2 19:17:52.726408 systemd[1]: Reached target multi-user.target.
Oct 2 19:17:52.728405 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 2 19:17:52.734634 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 2 19:17:52.734812 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 2 19:17:52.735817 systemd[1]: Startup finished in 778ms (kernel) + 6.755s (initrd) + 7.722s (userspace) = 15.256s.
Oct 2 19:17:52.765114 sshd[1154]: Accepted publickey for core from 10.0.0.1 port 37024 ssh2: RSA SHA256:9wRqOmzBU7I1L73Sd3XbDPVeoZziQ4I3fHnP0PJ8idM
Oct 2 19:17:52.766679 sshd[1154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:17:52.777289 systemd-logind[1089]: New session 1 of user core.
Oct 2 19:17:52.778465 systemd[1]: Created slice user-500.slice.
Oct 2 19:17:52.779808 systemd[1]: Starting user-runtime-dir@500.service...
Oct 2 19:17:52.787425 systemd[1]: Finished user-runtime-dir@500.service.
Oct 2 19:17:52.788906 systemd[1]: Starting user@500.service...
Oct 2 19:17:52.791301 (systemd)[1163]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:17:52.859590 systemd[1163]: Queued start job for default target default.target.
Oct 2 19:17:52.860075 systemd[1163]: Reached target paths.target.
Oct 2 19:17:52.860098 systemd[1163]: Reached target sockets.target.
Oct 2 19:17:52.860115 systemd[1163]: Reached target timers.target.
Oct 2 19:17:52.860130 systemd[1163]: Reached target basic.target.
Oct 2 19:17:52.860178 systemd[1163]: Reached target default.target.
Oct 2 19:17:52.860208 systemd[1163]: Startup finished in 63ms.
Oct 2 19:17:52.860385 systemd[1]: Started user@500.service.
Oct 2 19:17:52.861561 systemd[1]: Started session-1.scope.
Oct 2 19:17:52.915496 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:37036.service.
Oct 2 19:17:52.958692 sshd[1172]: Accepted publickey for core from 10.0.0.1 port 37036 ssh2: RSA SHA256:9wRqOmzBU7I1L73Sd3XbDPVeoZziQ4I3fHnP0PJ8idM
Oct 2 19:17:52.960190 sshd[1172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:17:52.964536 systemd-logind[1089]: New session 2 of user core.
Oct 2 19:17:52.965469 systemd[1]: Started session-2.scope.
Oct 2 19:17:53.025521 sshd[1172]: pam_unix(sshd:session): session closed for user core
Oct 2 19:17:53.028683 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:37036.service: Deactivated successfully.
Oct 2 19:17:53.029311 systemd[1]: session-2.scope: Deactivated successfully.
Oct 2 19:17:53.030027 systemd-logind[1089]: Session 2 logged out. Waiting for processes to exit.
Oct 2 19:17:53.032031 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:37044.service.
Oct 2 19:17:53.032898 systemd-logind[1089]: Removed session 2.
Oct 2 19:17:53.075203 sshd[1178]: Accepted publickey for core from 10.0.0.1 port 37044 ssh2: RSA SHA256:9wRqOmzBU7I1L73Sd3XbDPVeoZziQ4I3fHnP0PJ8idM
Oct 2 19:17:53.076369 sshd[1178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:17:53.080022 systemd-logind[1089]: New session 3 of user core.
Oct 2 19:17:53.080841 systemd[1]: Started session-3.scope.
Oct 2 19:17:53.130972 sshd[1178]: pam_unix(sshd:session): session closed for user core
Oct 2 19:17:53.133925 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:37044.service: Deactivated successfully.
Oct 2 19:17:53.134469 systemd[1]: session-3.scope: Deactivated successfully.
Oct 2 19:17:53.134966 systemd-logind[1089]: Session 3 logged out. Waiting for processes to exit.
Oct 2 19:17:53.136208 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:37058.service.
Oct 2 19:17:53.136858 systemd-logind[1089]: Removed session 3.
Oct 2 19:17:53.175122 sshd[1184]: Accepted publickey for core from 10.0.0.1 port 37058 ssh2: RSA SHA256:9wRqOmzBU7I1L73Sd3XbDPVeoZziQ4I3fHnP0PJ8idM
Oct 2 19:17:53.176402 sshd[1184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:17:53.179996 systemd-logind[1089]: New session 4 of user core.
Oct 2 19:17:53.180877 systemd[1]: Started session-4.scope.
Oct 2 19:17:53.235224 sshd[1184]: pam_unix(sshd:session): session closed for user core
Oct 2 19:17:53.237915 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:37058.service: Deactivated successfully.
Oct 2 19:17:53.238403 systemd[1]: session-4.scope: Deactivated successfully.
Oct 2 19:17:53.238882 systemd-logind[1089]: Session 4 logged out. Waiting for processes to exit.
Oct 2 19:17:53.239939 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:37060.service.
Oct 2 19:17:53.240623 systemd-logind[1089]: Removed session 4.
Oct 2 19:17:53.279599 sshd[1190]: Accepted publickey for core from 10.0.0.1 port 37060 ssh2: RSA SHA256:9wRqOmzBU7I1L73Sd3XbDPVeoZziQ4I3fHnP0PJ8idM
Oct 2 19:17:53.280996 sshd[1190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:17:53.284492 systemd-logind[1089]: New session 5 of user core.
Oct 2 19:17:53.285445 systemd[1]: Started session-5.scope.
Oct 2 19:17:53.343545 sudo[1193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 2 19:17:53.343708 sudo[1193]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:17:53.351390 dbus-daemon[1078]: \xd0\u001d\xcf8LV: received setenforce notice (enforcing=-1064709936)
Oct 2 19:17:53.353269 sudo[1193]: pam_unix(sudo:session): session closed for user root
Oct 2 19:17:53.355438 sshd[1190]: pam_unix(sshd:session): session closed for user core
Oct 2 19:17:53.358402 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:37060.service: Deactivated successfully.
Oct 2 19:17:53.359060 systemd[1]: session-5.scope: Deactivated successfully.
Oct 2 19:17:53.359637 systemd-logind[1089]: Session 5 logged out. Waiting for processes to exit.
Oct 2 19:17:53.360589 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:37074.service.
Oct 2 19:17:53.361425 systemd-logind[1089]: Removed session 5.
Oct 2 19:17:53.402655 sshd[1197]: Accepted publickey for core from 10.0.0.1 port 37074 ssh2: RSA SHA256:9wRqOmzBU7I1L73Sd3XbDPVeoZziQ4I3fHnP0PJ8idM
Oct 2 19:17:53.403759 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:17:53.407146 systemd-logind[1089]: New session 6 of user core.
Oct 2 19:17:53.408083 systemd[1]: Started session-6.scope.
Oct 2 19:17:53.464479 sudo[1201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 2 19:17:53.464792 sudo[1201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:17:53.469367 sudo[1201]: pam_unix(sudo:session): session closed for user root
Oct 2 19:17:53.475733 sudo[1200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 2 19:17:53.475938 sudo[1200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:17:53.484978 systemd[1]: Stopping audit-rules.service...
Oct 2 19:17:53.484000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:17:53.484000 audit[1204]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe33b98a70 a2=420 a3=0 items=0 ppid=1 pid=1204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:53.484000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:17:53.486182 auditctl[1204]: No rules Oct 2 19:17:53.486287 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:17:53.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:53.486408 systemd[1]: Stopped audit-rules.service. Oct 2 19:17:53.487567 systemd[1]: Starting audit-rules.service... Oct 2 19:17:53.501647 augenrules[1221]: No rules Oct 2 19:17:53.502155 systemd[1]: Finished audit-rules.service. Oct 2 19:17:53.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:53.502960 sudo[1200]: pam_unix(sudo:session): session closed for user root Oct 2 19:17:53.501000 audit[1200]: USER_END pid=1200 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:17:53.501000 audit[1200]: CRED_DISP pid=1200 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:17:53.504270 sshd[1197]: pam_unix(sshd:session): session closed for user core Oct 2 19:17:53.504000 audit[1197]: USER_END pid=1197 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:17:53.504000 audit[1197]: CRED_DISP pid=1197 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:17:53.506932 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:37074.service: Deactivated successfully. Oct 2 19:17:53.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.149:22-10.0.0.1:37074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:53.507584 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:17:53.508038 systemd-logind[1089]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:17:53.509319 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:37088.service. Oct 2 19:17:53.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:37088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:17:53.510050 systemd-logind[1089]: Removed session 6. Oct 2 19:17:53.546000 audit[1227]: USER_ACCT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:17:53.548362 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 37088 ssh2: RSA SHA256:9wRqOmzBU7I1L73Sd3XbDPVeoZziQ4I3fHnP0PJ8idM Oct 2 19:17:53.547000 audit[1227]: CRED_ACQ pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:17:53.547000 audit[1227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc90c83c70 a2=3 a3=0 items=0 ppid=1 pid=1227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:53.547000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:17:53.549488 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:17:53.553108 systemd-logind[1089]: New session 7 of user core. Oct 2 19:17:53.553959 systemd[1]: Started session-7.scope. Oct 2 19:17:53.556000 audit[1227]: USER_START pid=1227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:17:53.557000 audit[1229]: CRED_ACQ pid=1229 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:17:53.605000 audit[1230]: USER_ACCT pid=1230 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:17:53.605000 audit[1230]: CRED_REFR pid=1230 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:17:53.607040 sudo[1230]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:17:53.607246 sudo[1230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:17:53.607000 audit[1230]: USER_START pid=1230 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:17:54.115868 systemd[1]: Reloading. 
Oct 2 19:17:54.172335 /usr/lib/systemd/system-generators/torcx-generator[1260]: time="2023-10-02T19:17:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:17:54.172363 /usr/lib/systemd/system-generators/torcx-generator[1260]: time="2023-10-02T19:17:54Z" level=info msg="torcx already run" Oct 2 19:17:54.239432 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:17:54.239454 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:17:54.259498 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit: BPF prog-id=34 op=LOAD Oct 2 19:17:54.330000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit: BPF prog-id=35 op=LOAD Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit: BPF prog-id=36 
op=LOAD Oct 2 19:17:54.330000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:17:54.330000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit: BPF prog-id=37 op=LOAD Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.332000 audit: BPF prog-id=38 op=LOAD Oct 2 19:17:54.332000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:17:54.332000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.334000 audit: BPF prog-id=39 op=LOAD Oct 2 19:17:54.335000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit: BPF prog-id=40 op=LOAD Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit: BPF prog-id=41 op=LOAD Oct 2 19:17:54.335000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:17:54.335000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.335000 audit: BPF prog-id=42 op=LOAD Oct 2 19:17:54.335000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:17:54.337000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.337000 audit: BPF prog-id=43 op=LOAD Oct 2 19:17:54.337000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit: BPF prog-id=44 op=LOAD Oct 2 19:17:54.338000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit: BPF prog-id=45 op=LOAD Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.338000 audit: BPF prog-id=46 op=LOAD Oct 2 19:17:54.338000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:17:54.338000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.339000 audit: BPF prog-id=47 op=LOAD Oct 2 19:17:54.339000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:54.340000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:17:54.340000 audit: BPF prog-id=48 op=LOAD
Oct 2 19:17:54.340000 audit: BPF prog-id=28 op=UNLOAD
Oct 2 19:17:54.352655 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 2 19:17:54.839144 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 2 19:17:54.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:54.839734 systemd[1]: Reached target network-online.target.
Oct 2 19:17:54.841285 systemd[1]: Started kubelet.service.
Oct 2 19:17:54.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:54.851064 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:17:54.860434 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 2 19:17:54.860602 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:17:54.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:54.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:54.896612 kubelet[1301]: E1002 19:17:54.896517 1301 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:17:54.898804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:17:54.898953 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:17:54.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:17:55.179702 systemd[1]: Stopped kubelet.service.
Oct 2 19:17:55.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:55.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:17:55.193903 systemd[1]: Reloading.
Oct 2 19:17:55.255311 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2023-10-02T19:17:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:17:55.255343 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2023-10-02T19:17:55Z" level=info msg="torcx already run" Oct 2 19:17:55.315375 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:17:55.315389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:17:55.333546 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:17:55.383000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.387633 kernel: kauditd_printk_skb: 329 callbacks suppressed Oct 2 19:17:55.387719 kernel: audit: type=1400 audit(1696274275.383:358): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.387744 kernel: audit: type=1400 audit(1696274275.383:359): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.383000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.389525 kernel: audit: type=1400 audit(1696274275.384:360): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.384000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.391372 kernel: audit: type=1400 audit(1696274275.384:361): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.393232 kernel: audit: type=1400 audit(1696274275.384:362): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395098 kernel: audit: type=1400 audit(1696274275.384:363): avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.396952 kernel: audit: type=1400 audit(1696274275.384:364): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.398837 kernel: audit: type=1400 audit(1696274275.384:365): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.384000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.402518 kernel: audit: type=1400 audit(1696274275.384:366): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.402559 kernel: audit: type=1400 audit(1696274275.386:367): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit: BPF prog-id=49 op=LOAD Oct 2 19:17:55.386000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:17:55.386000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.386000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit: BPF prog-id=50 op=LOAD Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit: BPF prog-id=51 op=LOAD Oct 2 19:17:55.392000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:17:55.392000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit: BPF prog-id=52 op=LOAD Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.395000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.399000 audit: BPF prog-id=53 op=LOAD Oct 2 19:17:55.399000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:17:55.399000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit: BPF prog-id=54 op=LOAD Oct 2 19:17:55.403000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:17:55.403000 audit: BPF prog-id=55 op=LOAD Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit: BPF prog-id=56 op=LOAD Oct 2 19:17:55.403000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:17:55.403000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: 
AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.403000 audit: BPF prog-id=57 op=LOAD Oct 2 19:17:55.403000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit: BPF prog-id=58 op=LOAD Oct 2 19:17:55.405000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.405000 audit: BPF prog-id=59 op=LOAD Oct 2 19:17:55.406000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit: BPF prog-id=60 op=LOAD Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.406000 audit: BPF prog-id=61 op=LOAD Oct 2 19:17:55.406000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:17:55.406000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:17:55.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.407000 audit: BPF prog-id=62 op=LOAD Oct 2 19:17:55.407000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.408000 audit: BPF prog-id=63 op=LOAD Oct 2 19:17:55.408000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:17:55.419142 systemd[1]: Started kubelet.service. Oct 2 19:17:55.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:55.474572 kubelet[1411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:17:55.474572 kubelet[1411]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
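The two deprecation warnings above, together with the --volume-plugin-dir warning in the next entry, point at the kubelet config file named in the linked docs page: in v1.27 both settings have KubeletConfiguration equivalents, while --pod-infra-container-image has no config-file field because, as the message says, the image garbage collector now takes the sandbox image from CRI. Below is a minimal, stdlib-only Python sketch of writing such a file; the destination path /etc/kubernetes/kubelet-config.yaml and the containerd socket are assumptions, not values from this boot log, and the volume plugin directory is the FlexVolume path probed a few entries later.

from pathlib import Path
from textwrap import dedent

# Hypothetical KubeletConfiguration covering the two deprecated flags above:
# containerRuntimeEndpoint replaces --container-runtime-endpoint (field added in v1.27),
# volumePluginDir replaces --volume-plugin-dir.
kubelet_config_yaml = dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # assumed containerd default
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
""")

# Assumed destination; the kubelet would then be started with --config pointing at it.
Path("/etc/kubernetes/kubelet-config.yaml").write_text(kubelet_config_yaml)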
Oct 2 19:17:55.474572 kubelet[1411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:17:55.474572 kubelet[1411]: I1002 19:17:55.474554 1411 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:17:55.774348 kubelet[1411]: I1002 19:17:55.774167 1411 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Oct 2 19:17:55.774348 kubelet[1411]: I1002 19:17:55.774198 1411 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:17:55.774797 kubelet[1411]: I1002 19:17:55.774776 1411 server.go:837] "Client rotation is on, will bootstrap in background" Oct 2 19:17:55.777692 kubelet[1411]: I1002 19:17:55.777667 1411 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:17:55.781321 kubelet[1411]: I1002 19:17:55.781302 1411 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:17:55.781559 kubelet[1411]: I1002 19:17:55.781541 1411 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:17:55.781613 kubelet[1411]: I1002 19:17:55.781604 1411 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:17:55.781709 kubelet[1411]: I1002 19:17:55.781630 1411 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:17:55.781709 kubelet[1411]: I1002 19:17:55.781639 1411 container_manager_linux.go:302] "Creating device plugin manager" Oct 2 19:17:55.781764 kubelet[1411]: I1002 19:17:55.781736 1411 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:17:55.787040 kubelet[1411]: I1002 19:17:55.787014 1411 kubelet.go:405] "Attempting to sync node with API server" Oct 2 19:17:55.787147 kubelet[1411]: I1002 19:17:55.787083 1411 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:17:55.787147 kubelet[1411]: I1002 19:17:55.787112 1411 kubelet.go:309] "Adding apiserver pod source" Oct 2 
19:17:55.787147 kubelet[1411]: I1002 19:17:55.787139 1411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:17:55.787351 kubelet[1411]: E1002 19:17:55.787327 1411 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:55.787401 kubelet[1411]: E1002 19:17:55.787381 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:55.788088 kubelet[1411]: I1002 19:17:55.788067 1411 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:17:55.788473 kubelet[1411]: W1002 19:17:55.788456 1411 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:17:55.789064 kubelet[1411]: I1002 19:17:55.789041 1411 server.go:1168] "Started kubelet" Oct 2 19:17:55.789166 kubelet[1411]: I1002 19:17:55.789142 1411 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:17:55.789294 kubelet[1411]: I1002 19:17:55.789269 1411 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:17:55.789970 kubelet[1411]: E1002 19:17:55.789912 1411 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:17:55.790019 kubelet[1411]: E1002 19:17:55.789982 1411 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:17:55.789000 audit[1411]: AVC avc: denied { mac_admin } for pid=1411 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.789000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:17:55.789000 audit[1411]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b179b0 a1=c00067ea68 a2=c000b17980 a3=25 items=0 ppid=1 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.789000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:17:55.789000 audit[1411]: AVC avc: denied { mac_admin } for pid=1411 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.789000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:17:55.789000 audit[1411]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000743980 a1=c00067ea80 a2=c000b17a40 a3=25 items=0 ppid=1 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.789000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:17:55.791523 kubelet[1411]: I1002 19:17:55.791042 1411 kubelet.go:1355] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:17:55.791523 kubelet[1411]: I1002 19:17:55.791092 1411 kubelet.go:1359] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:17:55.791523 kubelet[1411]: I1002 19:17:55.791109 1411 server.go:461] "Adding debug handlers to kubelet server" Oct 2 19:17:55.791523 kubelet[1411]: I1002 19:17:55.791195 1411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:17:55.793828 kubelet[1411]: W1002 19:17:55.793132 1411 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:17:55.793828 kubelet[1411]: E1002 19:17:55.793161 1411 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:17:55.793828 kubelet[1411]: W1002 19:17:55.793190 1411 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.149" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:17:55.793828 kubelet[1411]: E1002 19:17:55.793198 1411 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.149" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:17:55.794567 kubelet[1411]: E1002 19:17:55.794439 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a60777209923f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 789017663, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 789017663, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" 
cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:17:55.794732 kubelet[1411]: E1002 19:17:55.794717 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:55.794787 kubelet[1411]: I1002 19:17:55.794753 1411 volume_manager.go:284] "Starting Kubelet Volume Manager" Oct 2 19:17:55.794859 kubelet[1411]: I1002 19:17:55.794848 1411 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Oct 2 19:17:55.795405 kubelet[1411]: E1002 19:17:55.795294 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607772180e55", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 789966933, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 789966933, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
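The PROCTITLE fields in the audit records above are the invoking command line, hex-encoded with NUL-separated arguments. Decoding the kubelet record shows it was launched with a bootstrap kubeconfig, which also explains why the requests in the surrounding entries are still rejected as system:anonymous: per the "Client rotation is on, will bootstrap in background" line earlier, the kubelet is talking to the API server before its bootstrap credentials have been exchanged for a client certificate. A small stdlib-only decoding sketch (the helper name is mine, not something from this log):

def decode_proctitle(hex_string: str) -> str:
    """Turn an audit PROCTITLE hex blob into a readable command line."""
    raw = bytes.fromhex(hex_string)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# The kubelet record quoted above; the kernel caps the recorded title, hence the
# cut-off last argument. Decoded:
#   /opt/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --confi
print(decode_proctitle(
    "2F6F70742F62696E2F6B7562656C6574" "00"
    "2D2D626F6F7473747261702D6B756265636F6E666967" "3D"
    "2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66" "00"
    "2D2D6B756265636F6E6669673D"
    "2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66" "00"
    "2D2D636F6E6669"))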
Oct 2 19:17:55.795858 kubelet[1411]: W1002 19:17:55.795670 1411 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:17:55.795858 kubelet[1411]: E1002 19:17:55.795686 1411 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:17:55.796168 kubelet[1411]: E1002 19:17:55.796151 1411 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.149\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:17:55.816163 kubelet[1411]: I1002 19:17:55.815885 1411 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:17:55.816163 kubelet[1411]: I1002 19:17:55.815905 1411 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:17:55.816163 kubelet[1411]: I1002 19:17:55.815923 1411 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:17:55.816163 kubelet[1411]: E1002 19:17:55.815749 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a6077739548e0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.149 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814951136, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814951136, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:17:55.816873 kubelet[1411]: E1002 19:17:55.816827 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607773955ff7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.149 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814957047, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814957047, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:17:55.817540 kubelet[1411]: E1002 19:17:55.817497 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607773956ce2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.149 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814960354, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814960354, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
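The rejected events above also show how their names are built: the legacy client-go event recorder names an event <involved object name>.<FirstTimestamp as UnixNano in hex>, so the hex suffix in a name like 10.0.0.149.178a6077739548e0 can be turned back into the FirstTimestamp printed in the same record. A quick stdlib-only check (the constant below is copied from the NodeHasSufficientMemory event above):

from datetime import datetime, timezone

suffix = 0x178a6077739548e0            # from event name "10.0.0.149.178a6077739548e0"
seconds, nanos = divmod(suffix, 10**9)
print(datetime.fromtimestamp(seconds, tz=timezone.utc), nanos)
# 2023-10-02 19:17:55+00:00 814951136  -- matches FirstTimestamp(..., 55, 814951136, ...) above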
Oct 2 19:17:55.820000 audit[1427]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:17:55.820000 audit[1427]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcd0749f20 a2=0 a3=7ffcd0749f0c items=0 ppid=1411 pid=1427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:17:55.821000 audit[1430]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:17:55.821000 audit[1430]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe77456d00 a2=0 a3=7ffe77456cec items=0 ppid=1411 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:17:55.823000 audit[1432]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:17:55.823000 audit[1432]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff22baa4e0 a2=0 a3=7fff22baa4cc items=0 ppid=1411 pid=1432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:17:55.876000 audit[1437]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:17:55.876000 audit[1437]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdd6e48960 a2=0 a3=7ffdd6e4894c items=0 ppid=1411 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:17:55.896357 kubelet[1411]: I1002 19:17:55.896333 1411 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.149" Oct 2 19:17:55.897803 kubelet[1411]: E1002 19:17:55.897777 1411 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.149" Oct 2 19:17:55.898448 kubelet[1411]: E1002 19:17:55.898346 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a6077739548e0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.149 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814951136, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 896283402, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a6077739548e0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:17:55.899575 kubelet[1411]: E1002 19:17:55.899475 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607773955ff7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.149 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814957047, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 896296417, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a607773955ff7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:17:55.900377 kubelet[1411]: I1002 19:17:55.900328 1411 policy_none.go:49] "None policy: Start" Oct 2 19:17:55.900790 kubelet[1411]: E1002 19:17:55.900557 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607773956ce2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.149 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814960354, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 896300314, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a607773956ce2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:17:55.901269 kubelet[1411]: I1002 19:17:55.901246 1411 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:17:55.901269 kubelet[1411]: I1002 19:17:55.901281 1411 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:17:55.912000 audit[1442]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:17:55.912000 audit[1442]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffab105410 a2=0 a3=7fffab1053fc items=0 ppid=1411 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:17:55.914011 kubelet[1411]: I1002 19:17:55.913880 1411 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:17:55.913000 audit[1443]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:17:55.913000 audit[1443]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcff70b410 a2=0 a3=7ffcff70b3fc items=0 ppid=1411 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:17:55.914889 kubelet[1411]: I1002 19:17:55.914820 1411 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:17:55.914889 kubelet[1411]: I1002 19:17:55.914867 1411 status_manager.go:207] "Starting to sync pod status with apiserver" Oct 2 19:17:55.914949 kubelet[1411]: I1002 19:17:55.914899 1411 kubelet.go:2257] "Starting kubelet main sync loop" Oct 2 19:17:55.914991 kubelet[1411]: E1002 19:17:55.914978 1411 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 2 19:17:55.913000 audit[1444]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:17:55.913000 audit[1444]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfce1ca70 a2=0 a3=7ffdfce1ca5c items=0 ppid=1411 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:17:55.914000 audit[1445]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:17:55.914000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedd1a1000 a2=0 a3=7ffedd1a0fec items=0 ppid=1411 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:17:55.915000 audit[1446]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:17:55.915000 audit[1446]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff5d3430b0 a2=0 a3=7fff5d34309c items=0 ppid=1411 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:17:55.915000 audit[1447]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=1447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:17:55.915000 audit[1447]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc4dcc68a0 a2=0 a3=7ffc4dcc688c items=0 ppid=1411 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:17:55.916979 kubelet[1411]: W1002 19:17:55.916921 1411 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:17:55.916979 kubelet[1411]: E1002 19:17:55.916946 1411 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:17:55.916000 audit[1449]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:17:55.916000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe287c8fa0 a2=0 a3=7ffe287c8f8c items=0 ppid=1411 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:17:55.916000 audit[1450]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:17:55.916000 audit[1450]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe22b7db40 a2=0 a3=7ffe22b7db2c items=0 ppid=1411 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:17:55.922390 systemd[1]: Created slice kubepods.slice. Oct 2 19:17:55.926348 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:17:55.928914 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:17:55.938903 kubelet[1411]: I1002 19:17:55.938877 1411 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:17:55.937000 audit[1411]: AVC avc: denied { mac_admin } for pid=1411 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:17:55.937000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:17:55.937000 audit[1411]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f5d9e0 a1=c0005412a8 a2=c000f5d9b0 a3=25 items=0 ppid=1 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:17:55.937000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:17:55.939235 kubelet[1411]: I1002 19:17:55.938962 1411 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:17:55.939282 kubelet[1411]: I1002 19:17:55.939253 1411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:17:55.940073 kubelet[1411]: E1002 19:17:55.940042 1411 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.149\" not found" Oct 2 19:17:55.942094 kubelet[1411]: E1002 19:17:55.941996 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a60777b0ef50a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 940365578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 940365578, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:17:55.999002 kubelet[1411]: E1002 19:17:55.998926 1411 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.149\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:17:56.100294 kubelet[1411]: I1002 19:17:56.100110 1411 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.149" Oct 2 19:17:56.102173 kubelet[1411]: E1002 19:17:56.102094 1411 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.149" Oct 2 19:17:56.102382 kubelet[1411]: E1002 19:17:56.102089 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a6077739548e0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.149 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814951136, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 56, 100035613, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a6077739548e0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:17:56.104402 kubelet[1411]: E1002 19:17:56.104339 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607773955ff7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.149 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814957047, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 56, 100057104, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a607773955ff7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:17:56.105419 kubelet[1411]: E1002 19:17:56.105342 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607773956ce2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.149 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814960354, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 56, 100060941, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a607773956ce2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:17:56.401281 kubelet[1411]: E1002 19:17:56.401117 1411 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.149\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:17:56.503596 kubelet[1411]: I1002 19:17:56.503558 1411 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.149" Oct 2 19:17:56.505298 kubelet[1411]: E1002 19:17:56.505246 1411 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.149" Oct 2 19:17:56.505298 kubelet[1411]: E1002 19:17:56.505224 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a6077739548e0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.149 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814951136, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 56, 503514765, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a6077739548e0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:17:56.506262 kubelet[1411]: E1002 19:17:56.506183 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607773955ff7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.149 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814957047, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 56, 503526377, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a607773955ff7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:17:56.507089 kubelet[1411]: E1002 19:17:56.507036 1411 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149.178a607773956ce2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.149", UID:"10.0.0.149", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.149 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.149"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 17, 55, 814960354, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 17, 56, 503529202, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.149.178a607773956ce2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:17:56.605144 kubelet[1411]: W1002 19:17:56.605088 1411 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:17:56.605144 kubelet[1411]: E1002 19:17:56.605136 1411 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:17:56.751349 kubelet[1411]: W1002 19:17:56.751198 1411 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:17:56.751349 kubelet[1411]: E1002 19:17:56.751258 1411 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:17:56.758516 kubelet[1411]: W1002 19:17:56.758489 1411 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.149" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:17:56.758516 kubelet[1411]: E1002 19:17:56.758516 1411 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.149" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:17:56.773478 kubelet[1411]: W1002 19:17:56.773419 1411 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:17:56.773478 kubelet[1411]: E1002 19:17:56.773464 1411 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:17:56.777556 kubelet[1411]: I1002 19:17:56.777516 1411 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:17:56.787841 kubelet[1411]: E1002 19:17:56.787811 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:57.144042 kubelet[1411]: E1002 19:17:57.143872 1411 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.149" not found Oct 2 19:17:57.205658 kubelet[1411]: E1002 19:17:57.205596 1411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.149\" not found" node="10.0.0.149" Oct 2 19:17:57.307238 kubelet[1411]: I1002 19:17:57.307187 1411 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.149" Oct 2 19:17:57.311235 kubelet[1411]: I1002 19:17:57.311138 1411 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.149" 
Oct 2 19:17:57.506353 kubelet[1411]: E1002 19:17:57.506182 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:57.606941 kubelet[1411]: E1002 19:17:57.606862 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:57.616011 sudo[1230]: pam_unix(sudo:session): session closed for user root Oct 2 19:17:57.614000 audit[1230]: USER_END pid=1230 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:17:57.614000 audit[1230]: CRED_DISP pid=1230 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:17:57.617639 sshd[1227]: pam_unix(sshd:session): session closed for user core Oct 2 19:17:57.617000 audit[1227]: USER_END pid=1227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:17:57.617000 audit[1227]: CRED_DISP pid=1227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:17:57.620471 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:37088.service: Deactivated successfully. Oct 2 19:17:57.621546 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:17:57.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.149:22-10.0.0.1:37088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:17:57.622470 systemd-logind[1089]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:17:57.623428 systemd-logind[1089]: Removed session 7. 
Oct 2 19:17:57.707385 kubelet[1411]: E1002 19:17:57.707295 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:57.789129 kubelet[1411]: E1002 19:17:57.788980 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:57.807738 kubelet[1411]: E1002 19:17:57.807642 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:57.908586 kubelet[1411]: E1002 19:17:57.908372 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.009397 kubelet[1411]: E1002 19:17:58.009333 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.110074 kubelet[1411]: E1002 19:17:58.109925 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.210615 kubelet[1411]: E1002 19:17:58.210546 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.311364 kubelet[1411]: E1002 19:17:58.311297 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.412440 kubelet[1411]: E1002 19:17:58.412250 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.512687 kubelet[1411]: E1002 19:17:58.512631 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.613572 kubelet[1411]: E1002 19:17:58.613468 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.714357 kubelet[1411]: E1002 19:17:58.714093 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.789656 kubelet[1411]: E1002 19:17:58.789561 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:58.815325 kubelet[1411]: E1002 19:17:58.815252 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:58.916191 kubelet[1411]: E1002 19:17:58.916107 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:59.017145 kubelet[1411]: E1002 19:17:59.016872 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:59.117613 kubelet[1411]: E1002 19:17:59.117511 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:59.218444 kubelet[1411]: E1002 19:17:59.218371 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:59.319445 kubelet[1411]: E1002 19:17:59.319271 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:59.420139 kubelet[1411]: E1002 19:17:59.420035 1411 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Oct 2 19:17:59.521365 kubelet[1411]: I1002 19:17:59.521319 1411 
kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:17:59.521875 env[1105]: time="2023-10-02T19:17:59.521801759Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:17:59.522085 kubelet[1411]: I1002 19:17:59.522066 1411 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:17:59.789885 kubelet[1411]: I1002 19:17:59.789658 1411 apiserver.go:52] "Watching apiserver" Oct 2 19:17:59.789885 kubelet[1411]: E1002 19:17:59.789764 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:59.792368 kubelet[1411]: I1002 19:17:59.792312 1411 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:17:59.792576 kubelet[1411]: I1002 19:17:59.792425 1411 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:17:59.796386 kubelet[1411]: I1002 19:17:59.796347 1411 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Oct 2 19:17:59.798845 systemd[1]: Created slice kubepods-besteffort-pod643d0a9e_3c62_4e3a_a739_2f6051edd110.slice. Oct 2 19:17:59.813581 systemd[1]: Created slice kubepods-burstable-pod0773655e_b50d_44c3_8cac_6a7eedfd7601.slice. Oct 2 19:17:59.816577 kubelet[1411]: I1002 19:17:59.816527 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-cgroup\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.816682 kubelet[1411]: I1002 19:17:59.816585 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-config-path\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.816682 kubelet[1411]: I1002 19:17:59.816623 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/643d0a9e-3c62-4e3a-a739-2f6051edd110-kube-proxy\") pod \"kube-proxy-8phxk\" (UID: \"643d0a9e-3c62-4e3a-a739-2f6051edd110\") " pod="kube-system/kube-proxy-8phxk" Oct 2 19:17:59.816682 kubelet[1411]: I1002 19:17:59.816667 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxgj\" (UniqueName: \"kubernetes.io/projected/643d0a9e-3c62-4e3a-a739-2f6051edd110-kube-api-access-8hxgj\") pod \"kube-proxy-8phxk\" (UID: \"643d0a9e-3c62-4e3a-a739-2f6051edd110\") " pod="kube-system/kube-proxy-8phxk" Oct 2 19:17:59.816786 kubelet[1411]: I1002 19:17:59.816752 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-lib-modules\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.816830 kubelet[1411]: I1002 19:17:59.816801 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-xtables-lock\") pod \"cilium-w5dwm\" (UID: 
\"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.816868 kubelet[1411]: I1002 19:17:59.816849 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/643d0a9e-3c62-4e3a-a739-2f6051edd110-lib-modules\") pod \"kube-proxy-8phxk\" (UID: \"643d0a9e-3c62-4e3a-a739-2f6051edd110\") " pod="kube-system/kube-proxy-8phxk" Oct 2 19:17:59.816918 kubelet[1411]: I1002 19:17:59.816901 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-bpf-maps\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.816956 kubelet[1411]: I1002 19:17:59.816935 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-hostproc\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.816994 kubelet[1411]: I1002 19:17:59.816961 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-etc-cni-netd\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.817031 kubelet[1411]: I1002 19:17:59.817005 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0773655e-b50d-44c3-8cac-6a7eedfd7601-hubble-tls\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.817069 kubelet[1411]: I1002 19:17:59.817046 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-run\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.817104 kubelet[1411]: I1002 19:17:59.817091 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cni-path\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.817152 kubelet[1411]: I1002 19:17:59.817137 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0773655e-b50d-44c3-8cac-6a7eedfd7601-clustermesh-secrets\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.817191 kubelet[1411]: I1002 19:17:59.817171 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-host-proc-sys-net\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.817250 kubelet[1411]: I1002 19:17:59.817222 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-host-proc-sys-kernel\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.817291 kubelet[1411]: I1002 19:17:59.817272 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlzp\" (UniqueName: \"kubernetes.io/projected/0773655e-b50d-44c3-8cac-6a7eedfd7601-kube-api-access-7hlzp\") pod \"cilium-w5dwm\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " pod="kube-system/cilium-w5dwm" Oct 2 19:17:59.817327 kubelet[1411]: I1002 19:17:59.817299 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/643d0a9e-3c62-4e3a-a739-2f6051edd110-xtables-lock\") pod \"kube-proxy-8phxk\" (UID: \"643d0a9e-3c62-4e3a-a739-2f6051edd110\") " pod="kube-system/kube-proxy-8phxk" Oct 2 19:17:59.817359 kubelet[1411]: I1002 19:17:59.817322 1411 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:18:00.112313 kubelet[1411]: E1002 19:18:00.112141 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:00.113024 env[1105]: time="2023-10-02T19:18:00.112973187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8phxk,Uid:643d0a9e-3c62-4e3a-a739-2f6051edd110,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:00.127352 kubelet[1411]: E1002 19:18:00.127309 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:00.128045 env[1105]: time="2023-10-02T19:18:00.127974305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5dwm,Uid:0773655e-b50d-44c3-8cac-6a7eedfd7601,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:00.790569 kubelet[1411]: E1002 19:18:00.790481 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:01.093449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489813004.mount: Deactivated successfully. 
Oct 2 19:18:01.099376 env[1105]: time="2023-10-02T19:18:01.099320313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:01.101932 env[1105]: time="2023-10-02T19:18:01.101894972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:01.105464 env[1105]: time="2023-10-02T19:18:01.105403803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:01.107177 env[1105]: time="2023-10-02T19:18:01.107118910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:01.109581 env[1105]: time="2023-10-02T19:18:01.109511338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:01.111291 env[1105]: time="2023-10-02T19:18:01.111259447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:01.112849 env[1105]: time="2023-10-02T19:18:01.112821196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:01.114635 env[1105]: time="2023-10-02T19:18:01.114603810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:01.137225 env[1105]: time="2023-10-02T19:18:01.137115906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:01.137225 env[1105]: time="2023-10-02T19:18:01.137157975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:01.137225 env[1105]: time="2023-10-02T19:18:01.137178714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:01.137599 env[1105]: time="2023-10-02T19:18:01.137565900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e464ff3a4d31180c54f7578b73a62c253af58d7b971c493604ce5c5bf9b78409 pid=1466 runtime=io.containerd.runc.v2 Oct 2 19:18:01.366618 env[1105]: time="2023-10-02T19:18:01.366423351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:01.366618 env[1105]: time="2023-10-02T19:18:01.366474938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:01.366618 env[1105]: time="2023-10-02T19:18:01.366506407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:01.367189 env[1105]: time="2023-10-02T19:18:01.367137912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620 pid=1480 runtime=io.containerd.runc.v2 Oct 2 19:18:01.380876 systemd[1]: Started cri-containerd-e464ff3a4d31180c54f7578b73a62c253af58d7b971c493604ce5c5bf9b78409.scope. Oct 2 19:18:01.383493 systemd[1]: Started cri-containerd-20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620.scope. Oct 2 19:18:01.418014 kernel: kauditd_printk_skb: 216 callbacks suppressed Oct 2 19:18:01.418188 kernel: audit: type=1400 audit(1696274281.403:551): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.418228 kernel: audit: type=1400 audit(1696274281.403:552): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.418426 kernel: audit: type=1400 audit(1696274281.403:553): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.418443 kernel: audit: type=1400 audit(1696274281.403:554): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.418457 kernel: audit: type=1400 audit(1696274281.403:555): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.422912 kernel: audit: type=1400 audit(1696274281.403:556): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.422983 kernel: audit: type=1400 audit(1696274281.403:557): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.425227 kernel: audit: type=1400 audit(1696274281.403:558): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.427391 kernel: audit: type=1400 audit(1696274281.403:559): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit: BPF prog-id=64 op=LOAD Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1480 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.447321 kernel: audit: type=1400 audit(1696274281.443:560): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623134613732336230396231306330303139666433316531643366 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1480 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623134613732336230396231306330303139666433316531643366 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.443000 audit: BPF prog-id=65 op=LOAD Oct 2 19:18:01.443000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001459d8 a2=78 a3=c0001db560 items=0 ppid=1480 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623134613732336230396231306330303139666433316531643366 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit: BPF prog-id=66 op=LOAD Oct 2 19:18:01.446000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c0001db5a8 items=0 ppid=1480 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623134613732336230396231306330303139666433316531643366 Oct 2 19:18:01.446000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:18:01.446000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { perfmon } for pid=1498 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit[1498]: AVC avc: denied { bpf } for pid=1498 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.446000 audit: BPF prog-id=67 op=LOAD Oct 2 19:18:01.446000 audit[1498]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c0001db9b8 items=0 ppid=1480 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623134613732336230396231306330303139666433316531643366 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.451000 audit: BPF prog-id=68 op=LOAD Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f 
a1=c000145c48 a2=10 a3=1c items=0 ppid=1466 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363466663361346433313138306335346637353738623733613632 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1466 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363466663361346433313138306335346637353738623733613632 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.452000 audit: BPF 
prog-id=69 op=LOAD Oct 2 19:18:01.452000 audit[1483]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000185360 items=0 ppid=1466 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363466663361346433313138306335346637353738623733613632 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.453000 audit: BPF prog-id=70 op=LOAD Oct 2 19:18:01.453000 audit[1483]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0001853a8 items=0 ppid=1466 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363466663361346433313138306335346637353738623733613632 Oct 2 19:18:01.454000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:18:01.454000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { perfmon } for pid=1483 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit[1483]: AVC avc: denied { bpf } for pid=1483 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:01.454000 audit: BPF prog-id=71 op=LOAD Oct 2 19:18:01.454000 audit[1483]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0001857b8 items=0 ppid=1466 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:01.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534363466663361346433313138306335346637353738623733613632 Oct 2 19:18:01.467783 env[1105]: time="2023-10-02T19:18:01.467710277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5dwm,Uid:0773655e-b50d-44c3-8cac-6a7eedfd7601,Namespace:kube-system,Attempt:0,} returns sandbox id \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\"" Oct 2 19:18:01.468778 kubelet[1411]: E1002 19:18:01.468748 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:01.471170 env[1105]: time="2023-10-02T19:18:01.471135782Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:18:01.471536 env[1105]: time="2023-10-02T19:18:01.471484226Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-8phxk,Uid:643d0a9e-3c62-4e3a-a739-2f6051edd110,Namespace:kube-system,Attempt:0,} returns sandbox id \"e464ff3a4d31180c54f7578b73a62c253af58d7b971c493604ce5c5bf9b78409\"" Oct 2 19:18:01.472087 kubelet[1411]: E1002 19:18:01.472055 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:01.791195 kubelet[1411]: E1002 19:18:01.791074 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:02.791911 kubelet[1411]: E1002 19:18:02.791856 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:03.793058 kubelet[1411]: E1002 19:18:03.792999 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:04.793947 kubelet[1411]: E1002 19:18:04.793892 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:05.794260 kubelet[1411]: E1002 19:18:05.794166 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:06.795206 kubelet[1411]: E1002 19:18:06.795152 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:07.796286 kubelet[1411]: E1002 19:18:07.796246 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:08.175696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2072869940.mount: Deactivated successfully. 
Oct 2 19:18:08.797390 kubelet[1411]: E1002 19:18:08.797303 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:09.797954 kubelet[1411]: E1002 19:18:09.797904 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:10.798600 kubelet[1411]: E1002 19:18:10.798551 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:11.799665 kubelet[1411]: E1002 19:18:11.799604 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:12.722406 env[1105]: time="2023-10-02T19:18:12.722318745Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:12.725042 env[1105]: time="2023-10-02T19:18:12.725003922Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:12.726626 env[1105]: time="2023-10-02T19:18:12.726599174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:12.727054 env[1105]: time="2023-10-02T19:18:12.727027548Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 19:18:12.727926 env[1105]: time="2023-10-02T19:18:12.727885828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\"" Oct 2 19:18:12.729370 env[1105]: time="2023-10-02T19:18:12.729313495Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:18:12.799867 kubelet[1411]: E1002 19:18:12.799830 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:13.043587 env[1105]: time="2023-10-02T19:18:13.043484200Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\"" Oct 2 19:18:13.044186 env[1105]: time="2023-10-02T19:18:13.044140351Z" level=info msg="StartContainer for \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\"" Oct 2 19:18:13.063647 systemd[1]: Started cri-containerd-b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568.scope. Oct 2 19:18:13.105105 systemd[1]: cri-containerd-b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568.scope: Deactivated successfully. Oct 2 19:18:13.108252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:13.800962 kubelet[1411]: E1002 19:18:13.800893 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:13.994982 env[1105]: time="2023-10-02T19:18:13.994911365Z" level=info msg="shim disconnected" id=b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568 Oct 2 19:18:13.994982 env[1105]: time="2023-10-02T19:18:13.994974303Z" level=warning msg="cleaning up after shim disconnected" id=b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568 namespace=k8s.io Oct 2 19:18:13.994982 env[1105]: time="2023-10-02T19:18:13.994992437Z" level=info msg="cleaning up dead shim" Oct 2 19:18:14.003017 env[1105]: time="2023-10-02T19:18:14.002948431Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1566 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:14.003346 env[1105]: time="2023-10-02T19:18:14.003234126Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed" Oct 2 19:18:14.004601 env[1105]: time="2023-10-02T19:18:14.004537812Z" level=error msg="Failed to pipe stdout of container \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\"" error="reading from a closed fifo" Oct 2 19:18:14.004904 env[1105]: time="2023-10-02T19:18:14.004525148Z" level=error msg="Failed to pipe stderr of container \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\"" error="reading from a closed fifo" Oct 2 19:18:14.009865 env[1105]: time="2023-10-02T19:18:14.009765928Z" level=error msg="StartContainer for \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:14.010256 kubelet[1411]: E1002 19:18:14.010232 1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568" Oct 2 19:18:14.010426 kubelet[1411]: E1002 19:18:14.010396 1411 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:14.010426 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:14.010426 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:18:14.010633 kubelet[1411]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7hlzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:14.010633 kubelet[1411]: E1002 19:18:14.010486 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:14.731425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320017465.mount: Deactivated successfully. 
Oct 2 19:18:14.801940 kubelet[1411]: E1002 19:18:14.801882 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:14.952957 kubelet[1411]: E1002 19:18:14.952923 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:14.954598 env[1105]: time="2023-10-02T19:18:14.954556896Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:18:14.971356 env[1105]: time="2023-10-02T19:18:14.971288089Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\"" Oct 2 19:18:14.971928 env[1105]: time="2023-10-02T19:18:14.971880861Z" level=info msg="StartContainer for \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\"" Oct 2 19:18:15.012284 systemd[1]: Started cri-containerd-c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f.scope. Oct 2 19:18:15.033995 systemd[1]: cri-containerd-c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f.scope: Deactivated successfully. Oct 2 19:18:15.176507 env[1105]: time="2023-10-02T19:18:15.176421140Z" level=info msg="shim disconnected" id=c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f Oct 2 19:18:15.176507 env[1105]: time="2023-10-02T19:18:15.176501471Z" level=warning msg="cleaning up after shim disconnected" id=c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f namespace=k8s.io Oct 2 19:18:15.176940 env[1105]: time="2023-10-02T19:18:15.176520717Z" level=info msg="cleaning up dead shim" Oct 2 19:18:15.186554 env[1105]: time="2023-10-02T19:18:15.186473966Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1603 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:15.186907 env[1105]: time="2023-10-02T19:18:15.186823562Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:18:15.192357 env[1105]: time="2023-10-02T19:18:15.192286288Z" level=error msg="Failed to pipe stdout of container \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\"" error="reading from a closed fifo" Oct 2 19:18:15.192473 env[1105]: time="2023-10-02T19:18:15.192301507Z" level=error msg="Failed to pipe stderr of container \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\"" error="reading from a closed fifo" Oct 2 19:18:15.196269 env[1105]: time="2023-10-02T19:18:15.196168741Z" level=error msg="StartContainer for \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:15.196547 kubelet[1411]: E1002 
19:18:15.196514 1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f" Oct 2 19:18:15.196676 kubelet[1411]: E1002 19:18:15.196641 1411 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:15.196676 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:15.196676 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:18:15.196676 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7hlzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:15.196676 kubelet[1411]: E1002 19:18:15.196677 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:15.361601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:15.405805 env[1105]: time="2023-10-02T19:18:15.405715811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:15.407391 env[1105]: time="2023-10-02T19:18:15.407342723Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:15.409240 env[1105]: time="2023-10-02T19:18:15.409162456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:15.410473 env[1105]: time="2023-10-02T19:18:15.410441505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8e9eff2f6d0b398f9ac5f5a15c1cb7d5f468f28d64a78d593d57f72a969a54ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:15.410924 env[1105]: time="2023-10-02T19:18:15.410884456Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\" returns image reference \"sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985\"" Oct 2 19:18:15.412765 env[1105]: time="2023-10-02T19:18:15.412729717Z" level=info msg="CreateContainer within sandbox \"e464ff3a4d31180c54f7578b73a62c253af58d7b971c493604ce5c5bf9b78409\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:18:15.428403 env[1105]: time="2023-10-02T19:18:15.428320311Z" level=info msg="CreateContainer within sandbox \"e464ff3a4d31180c54f7578b73a62c253af58d7b971c493604ce5c5bf9b78409\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b3ab132027f517d909c688be8352003c08ab420d626a5e0897aff4b74dafb59d\"" Oct 2 19:18:15.429039 env[1105]: time="2023-10-02T19:18:15.428943660Z" level=info msg="StartContainer for \"b3ab132027f517d909c688be8352003c08ab420d626a5e0897aff4b74dafb59d\"" Oct 2 19:18:15.452280 systemd[1]: Started cri-containerd-b3ab132027f517d909c688be8352003c08ab420d626a5e0897aff4b74dafb59d.scope. 
Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475660 kernel: kauditd_printk_skb: 104 callbacks suppressed Oct 2 19:18:15.475759 kernel: audit: type=1400 audit(1696274295.471:587): avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475799 kernel: audit: type=1300 audit(1696274295.471:587): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1466 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.471000 audit[1622]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1466 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233616231333230323766353137643930396336383862653833353230 Oct 2 19:18:15.481460 kernel: audit: type=1327 audit(1696274295.471:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233616231333230323766353137643930396336383862653833353230 Oct 2 19:18:15.481519 kernel: audit: type=1400 audit(1696274295.471:588): avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.485651 kernel: audit: type=1400 audit(1696274295.471:588): avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.485708 kernel: audit: type=1400 audit(1696274295.471:588): avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.489829 kernel: audit: type=1400 audit(1696274295.471:588): avc: denied { 
perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.489887 kernel: audit: type=1400 audit(1696274295.471:588): avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.491783 kernel: audit: type=1400 audit(1696274295.471:588): avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.495749 kernel: audit: type=1400 audit(1696274295.471:588): avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.471000 audit: BPF prog-id=72 op=LOAD Oct 2 19:18:15.471000 audit[1622]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0003a8710 items=0 ppid=1466 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233616231333230323766353137643930396336383862653833353230 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { 
perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.475000 audit: BPF prog-id=73 op=LOAD Oct 2 19:18:15.475000 audit[1622]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0003a8758 items=0 ppid=1466 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233616231333230323766353137643930396336383862653833353230 Oct 2 19:18:15.478000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:18:15.478000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { 
perfmon } for pid=1622 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit[1622]: AVC avc: denied { bpf } for pid=1622 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:15.478000 audit: BPF prog-id=74 op=LOAD Oct 2 19:18:15.478000 audit[1622]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0003a87e8 items=0 ppid=1466 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233616231333230323766353137643930396336383862653833353230 Oct 2 19:18:15.503457 env[1105]: time="2023-10-02T19:18:15.503403175Z" level=info msg="StartContainer for \"b3ab132027f517d909c688be8352003c08ab420d626a5e0897aff4b74dafb59d\" returns successfully" Oct 2 19:18:15.550000 audit[1672]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.550000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda9a7fa70 a2=0 a3=7ffda9a7fa5c items=0 ppid=1633 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:18:15.552000 audit[1673]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_chain pid=1673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.552000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff55e11270 a2=0 a3=7fff55e1125c items=0 ppid=1633 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:18:15.553000 audit[1674]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_chain pid=1674 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.553000 audit[1674]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff45e1a9a0 a2=0 a3=7fff45e1a98c items=0 ppid=1633 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:18:15.560000 audit[1675]: NETFILTER_CFG 
table=mangle:17 family=10 entries=1 op=nft_register_chain pid=1675 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.560000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe70ba0b30 a2=0 a3=7ffe70ba0b1c items=0 ppid=1633 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:18:15.561000 audit[1677]: NETFILTER_CFG table=nat:18 family=10 entries=1 op=nft_register_chain pid=1677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.561000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff61033460 a2=0 a3=7fff6103344c items=0 ppid=1633 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:18:15.562000 audit[1678]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1678 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.562000 audit[1678]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda08c22a0 a2=0 a3=7ffda08c228c items=0 ppid=1633 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.562000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:18:15.656000 audit[1679]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.656000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd82676d60 a2=0 a3=7ffd82676d4c items=0 ppid=1633 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:18:15.659000 audit[1681]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.659000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeb3ce4a90 a2=0 a3=7ffeb3ce4a7c items=0 ppid=1633 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:18:15.662000 audit[1684]: NETFILTER_CFG 
table=filter:22 family=2 entries=2 op=nft_register_chain pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.662000 audit[1684]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffc4e4e7a0 a2=0 a3=7fffc4e4e78c items=0 ppid=1633 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:18:15.663000 audit[1685]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.663000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4f5b8430 a2=0 a3=7ffc4f5b841c items=0 ppid=1633 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:18:15.666000 audit[1687]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.666000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7a2737c0 a2=0 a3=7ffe7a2737ac items=0 ppid=1633 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:18:15.667000 audit[1688]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.667000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaeda1580 a2=0 a3=7ffeaeda156c items=0 ppid=1633 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:18:15.674000 audit[1690]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.674000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc192ceba0 a2=0 a3=7ffc192ceb8c items=0 ppid=1633 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.674000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:18:15.677000 audit[1693]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.677000 audit[1693]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc25d0b160 a2=0 a3=7ffc25d0b14c items=0 ppid=1633 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:18:15.679000 audit[1694]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.679000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe805e47f0 a2=0 a3=7ffe805e47dc items=0 ppid=1633 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:18:15.681000 audit[1696]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.681000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf60f18a0 a2=0 a3=7ffcf60f188c items=0 ppid=1633 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:18:15.682000 audit[1697]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.682000 audit[1697]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcbfe6840 a2=0 a3=7fffcbfe682c items=0 ppid=1633 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:18:15.685000 audit[1699]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.685000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb3ce8010 a2=0 a3=7fffb3ce7ffc items=0 ppid=1633 pid=1699 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:18:15.688000 audit[1702]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.688000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde9bc7a90 a2=0 a3=7ffde9bc7a7c items=0 ppid=1633 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.688000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:18:15.692000 audit[1705]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.692000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdaeb5e420 a2=0 a3=7ffdaeb5e40c items=0 ppid=1633 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:18:15.693000 audit[1706]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1706 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.693000 audit[1706]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcce59b670 a2=0 a3=7ffcce59b65c items=0 ppid=1633 pid=1706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:18:15.695000 audit[1708]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.695000 audit[1708]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffeaf66ccb0 a2=0 a3=7ffeaf66cc9c items=0 ppid=1633 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.695000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:18:15.717000 audit[1714]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.717000 audit[1714]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff287d5470 a2=0 a3=7fff287d545c items=0 ppid=1633 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:18:15.722000 audit[1719]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.722000 audit[1719]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe453c0770 a2=0 a3=7ffe453c075c items=0 ppid=1633 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:18:15.724000 audit[1721]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:18:15.724000 audit[1721]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffef05237c0 a2=0 a3=7ffef05237ac items=0 ppid=1633 pid=1721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:18:15.733000 audit[1723]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:18:15.733000 audit[1723]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffcb8ba7820 a2=0 a3=7ffcb8ba780c items=0 ppid=1633 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:18:15.746000 audit[1723]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:18:15.746000 audit[1723]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffcb8ba7820 a2=0 a3=7ffcb8ba780c items=0 ppid=1633 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:18:15.747000 audit[1729]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1729 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.747000 audit[1729]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdd49bf980 a2=0 a3=7ffdd49bf96c items=0 ppid=1633 pid=1729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:18:15.749000 audit[1731]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1731 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.749000 audit[1731]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff6c0bd7a0 a2=0 a3=7fff6c0bd78c items=0 ppid=1633 pid=1731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.749000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:18:15.753000 audit[1734]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.753000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd96b329d0 a2=0 a3=7ffd96b329bc items=0 ppid=1633 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:18:15.754000 audit[1735]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1735 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.754000 audit[1735]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd41d79640 a2=0 a3=7ffd41d7962c items=0 ppid=1633 pid=1735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.754000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:18:15.757000 audit[1737]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1737 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.757000 audit[1737]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=528 a0=3 a1=7ffe7da39c90 a2=0 a3=7ffe7da39c7c items=0 ppid=1633 pid=1737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:18:15.758000 audit[1738]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.758000 audit[1738]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4a3d4b60 a2=0 a3=7ffc4a3d4b4c items=0 ppid=1633 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:18:15.760000 audit[1740]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.760000 audit[1740]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe42087250 a2=0 a3=7ffe4208723c items=0 ppid=1633 pid=1740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:18:15.763000 audit[1743]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1743 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.763000 audit[1743]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffd882c2a0 a2=0 a3=7fffd882c28c items=0 ppid=1633 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:18:15.764000 audit[1744]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1744 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.764000 audit[1744]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff53e78550 a2=0 a3=7fff53e7853c items=0 ppid=1633 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.764000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:18:15.766000 audit[1746]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1746 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.766000 audit[1746]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc0dee3ba0 a2=0 a3=7ffc0dee3b8c items=0 ppid=1633 pid=1746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:18:15.767000 audit[1747]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1747 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.767000 audit[1747]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdee065380 a2=0 a3=7ffdee06536c items=0 ppid=1633 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.767000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:18:15.769000 audit[1749]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1749 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.769000 audit[1749]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe727338d0 a2=0 a3=7ffe727338bc items=0 ppid=1633 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:18:15.772000 audit[1752]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.772000 audit[1752]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffeb8571e0 a2=0 a3=7fffeb8571cc items=0 ppid=1633 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:18:15.776000 audit[1755]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.776000 audit[1755]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce7c75aa0 a2=0 a3=7ffce7c75a8c items=0 ppid=1633 
pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:18:15.777000 audit[1756]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.777000 audit[1756]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedc722e60 a2=0 a3=7ffedc722e4c items=0 ppid=1633 pid=1756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:18:15.779000 audit[1758]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1758 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.779000 audit[1758]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe7ee718a0 a2=0 a3=7ffe7ee7188c items=0 ppid=1633 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:18:15.783000 audit[1761]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.783000 audit[1761]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd2f92d190 a2=0 a3=7ffd2f92d17c items=0 ppid=1633 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:18:15.784000 audit[1762]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.784000 audit[1762]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc64a01a0 a2=0 a3=7ffdc64a018c items=0 ppid=1633 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:18:15.786000 audit[1764]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_rule pid=1764 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.786000 audit[1764]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe7e7a9070 a2=0 a3=7ffe7e7a905c items=0 ppid=1633 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:18:15.787797 kubelet[1411]: E1002 19:18:15.787664 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:15.789000 audit[1767]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_rule pid=1767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.789000 audit[1767]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd4532aef0 a2=0 a3=7ffd4532aedc items=0 ppid=1633 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:18:15.790000 audit[1768]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=1768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.790000 audit[1768]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd36f728f0 a2=0 a3=7ffd36f728dc items=0 ppid=1633 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:18:15.792000 audit[1770]: NETFILTER_CFG table=nat:62 family=10 entries=2 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:18:15.792000 audit[1770]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff1b7deb40 a2=0 a3=7fff1b7deb2c items=0 ppid=1633 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:18:15.795000 audit[1772]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:18:15.795000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcb480afe0 a2=0 a3=7ffcb480afcc items=0 ppid=1633 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.795000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:18:15.795000 audit[1772]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:18:15.795000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffcb480afe0 a2=0 a3=7ffcb480afcc items=0 ppid=1633 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:15.795000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:18:15.802905 kubelet[1411]: E1002 19:18:15.802858 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:15.958322 kubelet[1411]: I1002 19:18:15.956048 1411 scope.go:115] "RemoveContainer" containerID="b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568" Oct 2 19:18:15.958322 kubelet[1411]: I1002 19:18:15.956326 1411 scope.go:115] "RemoveContainer" containerID="b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568" Oct 2 19:18:15.959530 env[1105]: time="2023-10-02T19:18:15.959479233Z" level=info msg="RemoveContainer for \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\"" Oct 2 19:18:15.959713 env[1105]: time="2023-10-02T19:18:15.959482750Z" level=info msg="RemoveContainer for \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\"" Oct 2 19:18:15.959713 env[1105]: time="2023-10-02T19:18:15.959671093Z" level=error msg="RemoveContainer for \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\" failed" error="failed to set removing state for container \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\": container is already in removing state" Oct 2 19:18:15.959955 kubelet[1411]: E1002 19:18:15.959934 1411 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\": container is already in removing state" containerID="b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568" Oct 2 19:18:15.960145 kubelet[1411]: E1002 19:18:15.960108 1411 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568": container is already in removing state; Skipping pod "cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)" Oct 2 19:18:15.960358 kubelet[1411]: E1002 19:18:15.960200 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:15.960441 kubelet[1411]: E1002 19:18:15.960415 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:15.960506 kubelet[1411]: E1002 19:18:15.960428 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:16.118921 env[1105]: time="2023-10-02T19:18:16.118844567Z" level=info msg="RemoveContainer for \"b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568\" returns successfully" Oct 2 19:18:16.357878 kubelet[1411]: I1002 19:18:16.357745 1411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8phxk" podStartSLOduration=5.419082789 podCreationTimestamp="2023-10-02 19:17:57 +0000 UTC" firstStartedPulling="2023-10-02 19:18:01.472588497 +0000 UTC m=+6.048538140" lastFinishedPulling="2023-10-02 19:18:15.411200309 +0000 UTC m=+19.987149952" observedRunningTime="2023-10-02 19:18:16.357559628 +0000 UTC m=+20.933509281" watchObservedRunningTime="2023-10-02 19:18:16.357694601 +0000 UTC m=+20.933644274" Oct 2 19:18:16.804551 kubelet[1411]: E1002 19:18:16.804373 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:16.963462 kubelet[1411]: E1002 19:18:16.963429 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:16.963462 kubelet[1411]: E1002 19:18:16.963447 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:16.963685 kubelet[1411]: E1002 19:18:16.963654 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:17.102296 kubelet[1411]: W1002 19:18:17.102084 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0773655e_b50d_44c3_8cac_6a7eedfd7601.slice/cri-containerd-b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568.scope WatchSource:0}: container "b556ec7da0890fcd1cdcc465dae332dff7e6e9195dec4a393929259651693568" in namespace "k8s.io": not found Oct 2 19:18:17.804863 kubelet[1411]: E1002 19:18:17.804799 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:18.805159 kubelet[1411]: E1002 19:18:18.805068 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:19.805825 kubelet[1411]: E1002 19:18:19.805737 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:20.210776 kubelet[1411]: W1002 19:18:20.210629 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0773655e_b50d_44c3_8cac_6a7eedfd7601.slice/cri-containerd-c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f.scope WatchSource:0}: task c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f not found: not found Oct 2 19:18:20.806422 kubelet[1411]: E1002 19:18:20.806270 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:18:21.806855 kubelet[1411]: E1002 19:18:21.806737 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:22.807337 kubelet[1411]: E1002 19:18:22.807263 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:23.808505 kubelet[1411]: E1002 19:18:23.808459 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:24.808636 kubelet[1411]: E1002 19:18:24.808576 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:25.809058 kubelet[1411]: E1002 19:18:25.808999 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:26.809577 kubelet[1411]: E1002 19:18:26.809511 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:27.810570 kubelet[1411]: E1002 19:18:27.810493 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:27.915906 kubelet[1411]: E1002 19:18:27.915851 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:27.918543 env[1105]: time="2023-10-02T19:18:27.918465040Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:18:28.063429 env[1105]: time="2023-10-02T19:18:28.063265456Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\"" Oct 2 19:18:28.063957 env[1105]: time="2023-10-02T19:18:28.063927643Z" level=info msg="StartContainer for \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\"" Oct 2 19:18:28.084784 systemd[1]: Started cri-containerd-528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840.scope. Oct 2 19:18:28.101017 systemd[1]: cri-containerd-528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840.scope: Deactivated successfully. Oct 2 19:18:28.105030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:28.167416 env[1105]: time="2023-10-02T19:18:28.167348993Z" level=info msg="shim disconnected" id=528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840 Oct 2 19:18:28.167727 env[1105]: time="2023-10-02T19:18:28.167426542Z" level=warning msg="cleaning up after shim disconnected" id=528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840 namespace=k8s.io Oct 2 19:18:28.167727 env[1105]: time="2023-10-02T19:18:28.167443944Z" level=info msg="cleaning up dead shim" Oct 2 19:18:28.181868 env[1105]: time="2023-10-02T19:18:28.181806924Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1797 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:28.182128 env[1105]: time="2023-10-02T19:18:28.182069677Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:18:28.182322 env[1105]: time="2023-10-02T19:18:28.182275521Z" level=error msg="Failed to pipe stdout of container \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\"" error="reading from a closed fifo" Oct 2 19:18:28.182547 env[1105]: time="2023-10-02T19:18:28.182499790Z" level=error msg="Failed to pipe stderr of container \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\"" error="reading from a closed fifo" Oct 2 19:18:28.185010 env[1105]: time="2023-10-02T19:18:28.184963075Z" level=error msg="StartContainer for \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:28.185292 kubelet[1411]: E1002 19:18:28.185256 1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840" Oct 2 19:18:28.185501 kubelet[1411]: E1002 19:18:28.185477 1411 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:28.185501 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:28.185501 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:18:28.185501 kubelet[1411]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7hlzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:28.185737 kubelet[1411]: E1002 19:18:28.185534 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:28.810775 kubelet[1411]: E1002 19:18:28.810681 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:28.985249 kubelet[1411]: I1002 19:18:28.985199 1411 scope.go:115] "RemoveContainer" containerID="c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f" Oct 2 19:18:28.985589 kubelet[1411]: I1002 19:18:28.985565 1411 scope.go:115] "RemoveContainer" containerID="c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f" Oct 2 19:18:28.986193 env[1105]: time="2023-10-02T19:18:28.986153486Z" level=info msg="RemoveContainer for \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\"" Oct 2 19:18:28.986670 env[1105]: time="2023-10-02T19:18:28.986631630Z" level=info msg="RemoveContainer for \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\"" Oct 2 19:18:28.986778 env[1105]: time="2023-10-02T19:18:28.986748253Z" level=error msg="RemoveContainer for \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\" failed" error="failed to set removing state for container \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\": container is already in removing state" Oct 2 19:18:28.986923 kubelet[1411]: E1002 19:18:28.986907 1411 remote_runtime.go:368] "RemoveContainer from runtime service 
failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\": container is already in removing state" containerID="c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f" Oct 2 19:18:28.986993 kubelet[1411]: E1002 19:18:28.986942 1411 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f": container is already in removing state; Skipping pod "cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)" Oct 2 19:18:28.987036 kubelet[1411]: E1002 19:18:28.987016 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:28.987230 kubelet[1411]: E1002 19:18:28.987204 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:28.989087 env[1105]: time="2023-10-02T19:18:28.989051933Z" level=info msg="RemoveContainer for \"c464910cbd948de910697f304e30df4879e3f3283d2547cc4be7eeab6cd0736f\" returns successfully" Oct 2 19:18:29.811088 kubelet[1411]: E1002 19:18:29.810977 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:30.811626 kubelet[1411]: E1002 19:18:30.811581 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:31.271742 kubelet[1411]: W1002 19:18:31.271706 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0773655e_b50d_44c3_8cac_6a7eedfd7601.slice/cri-containerd-528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840.scope WatchSource:0}: task 528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840 not found: not found Oct 2 19:18:31.812457 kubelet[1411]: E1002 19:18:31.812392 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:32.813277 kubelet[1411]: E1002 19:18:32.813143 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:33.814344 kubelet[1411]: E1002 19:18:33.814273 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:34.814956 kubelet[1411]: E1002 19:18:34.814889 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:35.787654 kubelet[1411]: E1002 19:18:35.787541 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:35.816124 kubelet[1411]: E1002 19:18:35.816051 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:36.816621 kubelet[1411]: E1002 19:18:36.816542 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:18:37.489209 update_engine[1091]: I1002 19:18:37.489073 1091 update_attempter.cc:505] Updating boot flags... Oct 2 19:18:37.817656 kubelet[1411]: E1002 19:18:37.817491 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:38.818467 kubelet[1411]: E1002 19:18:38.818395 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:39.819087 kubelet[1411]: E1002 19:18:39.818971 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.819931 kubelet[1411]: E1002 19:18:40.819868 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.916345 kubelet[1411]: E1002 19:18:40.916303 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:40.916548 kubelet[1411]: E1002 19:18:40.916515 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:41.821072 kubelet[1411]: E1002 19:18:41.820980 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:42.821189 kubelet[1411]: E1002 19:18:42.821131 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:43.822143 kubelet[1411]: E1002 19:18:43.822073 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:44.823310 kubelet[1411]: E1002 19:18:44.823249 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:45.824257 kubelet[1411]: E1002 19:18:45.824186 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:46.825361 kubelet[1411]: E1002 19:18:46.825292 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:47.825999 kubelet[1411]: E1002 19:18:47.825930 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:48.826426 kubelet[1411]: E1002 19:18:48.826353 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:49.827500 kubelet[1411]: E1002 19:18:49.827434 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:50.827903 kubelet[1411]: E1002 19:18:50.827803 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:51.828668 kubelet[1411]: E1002 19:18:51.828605 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:52.829335 kubelet[1411]: E1002 19:18:52.829243 1411 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.830171 kubelet[1411]: E1002 19:18:53.830108 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.916587 kubelet[1411]: E1002 19:18:53.916533 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:53.918650 env[1105]: time="2023-10-02T19:18:53.918609123Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:18:53.934733 env[1105]: time="2023-10-02T19:18:53.934663408Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\"" Oct 2 19:18:53.935349 env[1105]: time="2023-10-02T19:18:53.935304836Z" level=info msg="StartContainer for \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\"" Oct 2 19:18:53.953303 systemd[1]: Started cri-containerd-50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec.scope. Oct 2 19:18:53.961631 systemd[1]: cri-containerd-50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec.scope: Deactivated successfully. Oct 2 19:18:53.961909 systemd[1]: Stopped cri-containerd-50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec.scope. Oct 2 19:18:53.964974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:53.971692 env[1105]: time="2023-10-02T19:18:53.971624638Z" level=info msg="shim disconnected" id=50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec Oct 2 19:18:53.971692 env[1105]: time="2023-10-02T19:18:53.971680133Z" level=warning msg="cleaning up after shim disconnected" id=50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec namespace=k8s.io Oct 2 19:18:53.971692 env[1105]: time="2023-10-02T19:18:53.971689330Z" level=info msg="cleaning up dead shim" Oct 2 19:18:53.979153 env[1105]: time="2023-10-02T19:18:53.979078967Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1851 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:53.979510 env[1105]: time="2023-10-02T19:18:53.979435519Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:18:53.984369 env[1105]: time="2023-10-02T19:18:53.984302976Z" level=error msg="Failed to pipe stdout of container \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\"" error="reading from a closed fifo" Oct 2 19:18:53.984545 env[1105]: time="2023-10-02T19:18:53.984408524Z" level=error msg="Failed to pipe stderr of container \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\"" error="reading from a closed fifo" Oct 2 19:18:53.986949 env[1105]: time="2023-10-02T19:18:53.986869288Z" level=error msg="StartContainer for \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:53.987183 kubelet[1411]: E1002 19:18:53.987163 1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec" Oct 2 19:18:53.987337 kubelet[1411]: E1002 19:18:53.987310 1411 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:53.987337 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:53.987337 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:18:53.987337 kubelet[1411]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7hlzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:53.987549 kubelet[1411]: E1002 19:18:53.987356 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:54.029310 kubelet[1411]: I1002 19:18:54.029265 1411 scope.go:115] "RemoveContainer" containerID="528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840" Oct 2 19:18:54.029604 kubelet[1411]: I1002 19:18:54.029578 1411 scope.go:115] "RemoveContainer" containerID="528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840" Oct 2 19:18:54.030731 env[1105]: time="2023-10-02T19:18:54.030675556Z" level=info msg="RemoveContainer for \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\"" Oct 2 19:18:54.030940 env[1105]: time="2023-10-02T19:18:54.030672670Z" level=info msg="RemoveContainer for \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\"" Oct 2 19:18:54.031035 env[1105]: time="2023-10-02T19:18:54.030990388Z" level=error msg="RemoveContainer for \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\" failed" error="failed to set removing state for container \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\": container is already in removing state" Oct 2 19:18:54.031250 kubelet[1411]: E1002 19:18:54.031201 1411 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\": container is already in 
removing state" containerID="528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840" Oct 2 19:18:54.031966 kubelet[1411]: I1002 19:18:54.031260 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840} err="rpc error: code = Unknown desc = failed to set removing state for container \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\": container is already in removing state" Oct 2 19:18:54.036006 env[1105]: time="2023-10-02T19:18:54.035953564Z" level=info msg="RemoveContainer for \"528e77b4ff383410d1436d20f78305b311114b7e04386fc09c573afa72c15840\" returns successfully" Oct 2 19:18:54.036250 kubelet[1411]: E1002 19:18:54.036228 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:18:54.036484 kubelet[1411]: E1002 19:18:54.036463 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:18:54.830815 kubelet[1411]: E1002 19:18:54.830759 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:55.787263 kubelet[1411]: E1002 19:18:55.787189 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:55.831278 kubelet[1411]: E1002 19:18:55.831253 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:56.831913 kubelet[1411]: E1002 19:18:56.831876 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:57.076931 kubelet[1411]: W1002 19:18:57.076878 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0773655e_b50d_44c3_8cac_6a7eedfd7601.slice/cri-containerd-50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec.scope WatchSource:0}: task 50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec not found: not found Oct 2 19:18:57.832972 kubelet[1411]: E1002 19:18:57.832915 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:58.833480 kubelet[1411]: E1002 19:18:58.833409 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:59.834309 kubelet[1411]: E1002 19:18:59.834231 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.835236 kubelet[1411]: E1002 19:19:00.835014 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:01.835225 kubelet[1411]: E1002 19:19:01.835168 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:02.836251 kubelet[1411]: E1002 19:19:02.836186 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:19:03.836551 kubelet[1411]: E1002 19:19:03.836477 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:04.836997 kubelet[1411]: E1002 19:19:04.836910 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:04.916119 kubelet[1411]: E1002 19:19:04.916065 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:04.916376 kubelet[1411]: E1002 19:19:04.916361 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:19:05.837875 kubelet[1411]: E1002 19:19:05.837803 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:06.838930 kubelet[1411]: E1002 19:19:06.838860 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:07.839780 kubelet[1411]: E1002 19:19:07.839700 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:08.840272 kubelet[1411]: E1002 19:19:08.840173 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:09.841288 kubelet[1411]: E1002 19:19:09.841241 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:10.842407 kubelet[1411]: E1002 19:19:10.842332 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:11.843121 kubelet[1411]: E1002 19:19:11.843032 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:12.843998 kubelet[1411]: E1002 19:19:12.843949 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:13.844268 kubelet[1411]: E1002 19:19:13.844178 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:14.845304 kubelet[1411]: E1002 19:19:14.845193 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:15.787822 kubelet[1411]: E1002 19:19:15.787771 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:15.846349 kubelet[1411]: E1002 19:19:15.846271 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:16.847432 kubelet[1411]: E1002 19:19:16.847375 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:16.915541 kubelet[1411]: E1002 19:19:16.915494 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:16.915747 kubelet[1411]: E1002 19:19:16.915729 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:19:17.848382 kubelet[1411]: E1002 19:19:17.848307 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:18.848661 kubelet[1411]: E1002 19:19:18.848603 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:19.848941 kubelet[1411]: E1002 19:19:19.848741 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:20.849917 kubelet[1411]: E1002 19:19:20.849847 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:21.851029 kubelet[1411]: E1002 19:19:21.850952 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:22.851583 kubelet[1411]: E1002 19:19:22.851521 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.852537 kubelet[1411]: E1002 19:19:23.852484 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.916240 kubelet[1411]: E1002 19:19:23.916113 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:24.853465 kubelet[1411]: E1002 19:19:24.853406 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:25.853928 kubelet[1411]: E1002 19:19:25.853885 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:26.854425 kubelet[1411]: E1002 19:19:26.854356 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:27.854816 kubelet[1411]: E1002 19:19:27.854736 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:28.855348 kubelet[1411]: E1002 19:19:28.855277 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:29.855970 kubelet[1411]: E1002 19:19:29.855895 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:30.856400 kubelet[1411]: E1002 19:19:30.856351 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:31.857096 kubelet[1411]: E1002 19:19:31.857047 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:31.916301 kubelet[1411]: E1002 19:19:31.916246 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:31.916500 kubelet[1411]: E1002 19:19:31.916469 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:19:32.857952 kubelet[1411]: E1002 19:19:32.857901 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:33.858359 kubelet[1411]: E1002 19:19:33.858279 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:34.858953 kubelet[1411]: E1002 19:19:34.858889 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:35.788039 kubelet[1411]: E1002 19:19:35.787961 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:35.859645 kubelet[1411]: E1002 19:19:35.859586 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:36.860516 kubelet[1411]: E1002 19:19:36.860441 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:37.861635 kubelet[1411]: E1002 19:19:37.861562 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.862006 kubelet[1411]: E1002 19:19:38.861970 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:39.862150 kubelet[1411]: E1002 19:19:39.862092 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.862556 kubelet[1411]: E1002 19:19:40.862485 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:41.862727 kubelet[1411]: E1002 19:19:41.862665 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:42.863763 kubelet[1411]: E1002 19:19:42.863696 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:43.864559 kubelet[1411]: E1002 19:19:43.864496 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:44.865066 kubelet[1411]: E1002 19:19:44.865018 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:45.866054 kubelet[1411]: E1002 19:19:45.865998 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:46.866767 kubelet[1411]: E1002 19:19:46.866695 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:46.915887 kubelet[1411]: E1002 19:19:46.915825 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 2 19:19:46.917801 env[1105]: time="2023-10-02T19:19:46.917755750Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:19:46.929011 env[1105]: time="2023-10-02T19:19:46.928949959Z" level=info msg="CreateContainer within sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452\"" Oct 2 19:19:46.929520 env[1105]: time="2023-10-02T19:19:46.929485983Z" level=info msg="StartContainer for \"f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452\"" Oct 2 19:19:46.946963 systemd[1]: Started cri-containerd-f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452.scope. Oct 2 19:19:46.957424 systemd[1]: cri-containerd-f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452.scope: Deactivated successfully. Oct 2 19:19:46.957644 systemd[1]: Stopped cri-containerd-f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452.scope. Oct 2 19:19:46.960623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452-rootfs.mount: Deactivated successfully. Oct 2 19:19:46.965985 env[1105]: time="2023-10-02T19:19:46.965912989Z" level=info msg="shim disconnected" id=f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452 Oct 2 19:19:46.966173 env[1105]: time="2023-10-02T19:19:46.965987832Z" level=warning msg="cleaning up after shim disconnected" id=f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452 namespace=k8s.io Oct 2 19:19:46.966173 env[1105]: time="2023-10-02T19:19:46.966003162Z" level=info msg="cleaning up dead shim" Oct 2 19:19:46.972499 env[1105]: time="2023-10-02T19:19:46.972462409Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1892 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:46.972725 env[1105]: time="2023-10-02T19:19:46.972672882Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:19:46.972919 env[1105]: time="2023-10-02T19:19:46.972851823Z" level=error msg="Failed to pipe stdout of container \"f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452\"" error="reading from a closed fifo" Oct 2 19:19:46.973373 env[1105]: time="2023-10-02T19:19:46.973304789Z" level=error msg="Failed to pipe stderr of container \"f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452\"" error="reading from a closed fifo" Oct 2 19:19:46.975536 env[1105]: time="2023-10-02T19:19:46.975495677Z" level=error msg="StartContainer for \"f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:46.975789 kubelet[1411]: E1002 19:19:46.975753 1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc 
error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452" Oct 2 19:19:46.975936 kubelet[1411]: E1002 19:19:46.975913 1411 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:46.975936 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:46.975936 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:19:46.975936 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7hlzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:46.976112 kubelet[1411]: E1002 19:19:46.975977 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:19:47.117598 kubelet[1411]: I1002 19:19:47.116830 1411 scope.go:115] "RemoveContainer" containerID="50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec" Oct 2 19:19:47.117598 kubelet[1411]: I1002 19:19:47.117188 1411 scope.go:115] "RemoveContainer" containerID="50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec" Oct 2 19:19:47.118284 env[1105]: time="2023-10-02T19:19:47.118239685Z" level=info msg="RemoveContainer for 
\"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\"" Oct 2 19:19:47.118284 env[1105]: time="2023-10-02T19:19:47.118242281Z" level=info msg="RemoveContainer for \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\"" Oct 2 19:19:47.118487 env[1105]: time="2023-10-02T19:19:47.118417335Z" level=error msg="RemoveContainer for \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\" failed" error="failed to set removing state for container \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\": container is already in removing state" Oct 2 19:19:47.118673 kubelet[1411]: E1002 19:19:47.118648 1411 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\": container is already in removing state" containerID="50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec" Oct 2 19:19:47.118783 kubelet[1411]: I1002 19:19:47.118697 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec} err="rpc error: code = Unknown desc = failed to set removing state for container \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\": container is already in removing state" Oct 2 19:19:47.121157 env[1105]: time="2023-10-02T19:19:47.121128686Z" level=info msg="RemoveContainer for \"50b7304f2aead0f72c89f04fc940ebcd23f28c31022fd711354bd89d86a102ec\" returns successfully" Oct 2 19:19:47.121375 kubelet[1411]: E1002 19:19:47.121346 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:19:47.121557 kubelet[1411]: E1002 19:19:47.121543 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:19:47.867332 kubelet[1411]: E1002 19:19:47.867269 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:48.867840 kubelet[1411]: E1002 19:19:48.867767 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:49.868903 kubelet[1411]: E1002 19:19:49.868833 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:50.071705 kubelet[1411]: W1002 19:19:50.071645 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0773655e_b50d_44c3_8cac_6a7eedfd7601.slice/cri-containerd-f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452.scope WatchSource:0}: task f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452 not found: not found Oct 2 19:19:50.869856 kubelet[1411]: E1002 19:19:50.869779 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:51.870594 kubelet[1411]: E1002 19:19:51.870542 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:19:52.870736 kubelet[1411]: E1002 19:19:52.870674 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:53.871312 kubelet[1411]: E1002 19:19:53.871258 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:54.872428 kubelet[1411]: E1002 19:19:54.872366 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:55.788120 kubelet[1411]: E1002 19:19:55.788057 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:55.841913 kubelet[1411]: E1002 19:19:55.841847 1411 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:19:55.873206 kubelet[1411]: E1002 19:19:55.873128 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:55.959485 kubelet[1411]: E1002 19:19:55.959419 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:56.873966 kubelet[1411]: E1002 19:19:56.873889 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:57.875065 kubelet[1411]: E1002 19:19:57.874961 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:58.875182 kubelet[1411]: E1002 19:19:58.875107 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:59.876072 kubelet[1411]: E1002 19:19:59.876007 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:00.876765 kubelet[1411]: E1002 19:20:00.876680 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:00.916397 kubelet[1411]: E1002 19:20:00.916344 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:00.916641 kubelet[1411]: E1002 19:20:00.916610 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:20:00.960658 kubelet[1411]: E1002 19:20:00.960626 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:01.877328 kubelet[1411]: E1002 19:20:01.877258 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:02.877723 kubelet[1411]: E1002 19:20:02.877648 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:03.878813 kubelet[1411]: E1002 19:20:03.878735 1411 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:04.879244 kubelet[1411]: E1002 19:20:04.879122 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:05.879391 kubelet[1411]: E1002 19:20:05.879313 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:05.961412 kubelet[1411]: E1002 19:20:05.961362 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:06.879895 kubelet[1411]: E1002 19:20:06.879807 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:07.880807 kubelet[1411]: E1002 19:20:07.880735 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:08.881317 kubelet[1411]: E1002 19:20:08.881238 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:09.881727 kubelet[1411]: E1002 19:20:09.881649 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:10.882379 kubelet[1411]: E1002 19:20:10.882297 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:10.962638 kubelet[1411]: E1002 19:20:10.962593 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:11.883313 kubelet[1411]: E1002 19:20:11.883234 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:12.884471 kubelet[1411]: E1002 19:20:12.884397 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:13.885593 kubelet[1411]: E1002 19:20:13.885526 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:13.916620 kubelet[1411]: E1002 19:20:13.916580 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:13.917500 kubelet[1411]: E1002 19:20:13.917449 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:20:14.886019 kubelet[1411]: E1002 19:20:14.885937 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:15.788092 kubelet[1411]: E1002 19:20:15.788015 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:15.886591 kubelet[1411]: E1002 19:20:15.886521 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:20:15.963036 kubelet[1411]: E1002 19:20:15.962994 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:16.887159 kubelet[1411]: E1002 19:20:16.887077 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:17.887700 kubelet[1411]: E1002 19:20:17.887619 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:18.888712 kubelet[1411]: E1002 19:20:18.888634 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:19.889354 kubelet[1411]: E1002 19:20:19.889270 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:20.890233 kubelet[1411]: E1002 19:20:20.890170 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:20.964426 kubelet[1411]: E1002 19:20:20.964313 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:21.890524 kubelet[1411]: E1002 19:20:21.890462 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:22.891578 kubelet[1411]: E1002 19:20:22.891509 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:23.892243 kubelet[1411]: E1002 19:20:23.892139 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:24.893113 kubelet[1411]: E1002 19:20:24.893037 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:24.915762 kubelet[1411]: E1002 19:20:24.915723 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:24.915996 kubelet[1411]: E1002 19:20:24.915972 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:20:25.893837 kubelet[1411]: E1002 19:20:25.893774 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:25.964710 kubelet[1411]: E1002 19:20:25.964683 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:26.894004 kubelet[1411]: E1002 19:20:26.893906 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:27.894988 kubelet[1411]: E1002 19:20:27.894906 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:28.895676 
kubelet[1411]: E1002 19:20:28.895596 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:29.896708 kubelet[1411]: E1002 19:20:29.896639 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:30.897608 kubelet[1411]: E1002 19:20:30.897513 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:30.965993 kubelet[1411]: E1002 19:20:30.965939 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:31.898308 kubelet[1411]: E1002 19:20:31.898237 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:32.898521 kubelet[1411]: E1002 19:20:32.898438 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:33.898717 kubelet[1411]: E1002 19:20:33.898641 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:34.899481 kubelet[1411]: E1002 19:20:34.899410 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:35.787408 kubelet[1411]: E1002 19:20:35.787348 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:35.900069 kubelet[1411]: E1002 19:20:35.899995 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:35.966468 kubelet[1411]: E1002 19:20:35.966421 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:36.901012 kubelet[1411]: E1002 19:20:36.900940 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:37.901616 kubelet[1411]: E1002 19:20:37.901541 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:37.916379 kubelet[1411]: E1002 19:20:37.916331 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:37.916624 kubelet[1411]: E1002 19:20:37.916604 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:20:38.902278 kubelet[1411]: E1002 19:20:38.902197 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:39.903119 kubelet[1411]: E1002 19:20:39.903048 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:40.904248 kubelet[1411]: E1002 19:20:40.904160 1411 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:40.967562 kubelet[1411]: E1002 19:20:40.967524 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:41.905006 kubelet[1411]: E1002 19:20:41.904924 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:42.906153 kubelet[1411]: E1002 19:20:42.906092 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:42.915752 kubelet[1411]: E1002 19:20:42.915680 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:43.906707 kubelet[1411]: E1002 19:20:43.906635 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:44.907741 kubelet[1411]: E1002 19:20:44.907664 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:45.908161 kubelet[1411]: E1002 19:20:45.908040 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:45.968379 kubelet[1411]: E1002 19:20:45.968312 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:46.909253 kubelet[1411]: E1002 19:20:46.909171 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:47.909593 kubelet[1411]: E1002 19:20:47.909511 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:48.910054 kubelet[1411]: E1002 19:20:48.909985 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:49.910817 kubelet[1411]: E1002 19:20:49.910656 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:50.911752 kubelet[1411]: E1002 19:20:50.911693 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:50.969649 kubelet[1411]: E1002 19:20:50.969580 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:51.912308 kubelet[1411]: E1002 19:20:51.912249 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:51.915832 kubelet[1411]: E1002 19:20:51.915801 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:20:51.916066 kubelet[1411]: E1002 19:20:51.916049 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup 
pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:20:52.912886 kubelet[1411]: E1002 19:20:52.912810 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:53.913885 kubelet[1411]: E1002 19:20:53.913807 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:54.914444 kubelet[1411]: E1002 19:20:54.914360 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:55.787809 kubelet[1411]: E1002 19:20:55.787742 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:55.914614 kubelet[1411]: E1002 19:20:55.914543 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:55.970018 kubelet[1411]: E1002 19:20:55.969985 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:56.914924 kubelet[1411]: E1002 19:20:56.914864 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:57.915783 kubelet[1411]: E1002 19:20:57.915693 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:58.916448 kubelet[1411]: E1002 19:20:58.916364 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:59.916634 kubelet[1411]: E1002 19:20:59.916595 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:00.917562 kubelet[1411]: E1002 19:21:00.917509 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:00.971292 kubelet[1411]: E1002 19:21:00.971245 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:01.918171 kubelet[1411]: E1002 19:21:01.918129 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:02.919183 kubelet[1411]: E1002 19:21:02.919110 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:03.915849 kubelet[1411]: E1002 19:21:03.915787 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:03.916072 kubelet[1411]: E1002 19:21:03.916050 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w5dwm_kube-system(0773655e-b50d-44c3-8cac-6a7eedfd7601)\"" pod="kube-system/cilium-w5dwm" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 Oct 2 19:21:03.919774 kubelet[1411]: E1002 19:21:03.919750 1411 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:04.097419 kubelet[1411]: E1002 19:21:04.097357 1411 configmap.go:199] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Oct 2 19:21:04.097640 kubelet[1411]: E1002 19:21:04.097483 1411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-config-path podName:0773655e-b50d-44c3-8cac-6a7eedfd7601 nodeName:}" failed. No retries permitted until 2023-10-02 19:21:04.597456757 +0000 UTC m=+189.173406400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-config-path") pod "cilium-w5dwm" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601") : configmap "cilium-config" not found Oct 2 19:21:04.246661 env[1105]: time="2023-10-02T19:21:04.246605607Z" level=info msg="StopPodSandbox for \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\"" Oct 2 19:21:04.247067 env[1105]: time="2023-10-02T19:21:04.246694525Z" level=info msg="Container to stop \"f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:21:04.248508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620-shm.mount: Deactivated successfully. Oct 2 19:21:04.254377 systemd[1]: cri-containerd-20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620.scope: Deactivated successfully. Oct 2 19:21:04.253000 audit: BPF prog-id=64 op=UNLOAD Oct 2 19:21:04.255353 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:21:04.255451 kernel: audit: type=1334 audit(1696274464.253:644): prog-id=64 op=UNLOAD Oct 2 19:21:04.257000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:21:04.260237 kernel: audit: type=1334 audit(1696274464.257:645): prog-id=67 op=UNLOAD Oct 2 19:21:04.272539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620-rootfs.mount: Deactivated successfully. 
Oct 2 19:21:04.278943 env[1105]: time="2023-10-02T19:21:04.278876758Z" level=info msg="shim disconnected" id=20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620 Oct 2 19:21:04.279182 env[1105]: time="2023-10-02T19:21:04.278959815Z" level=warning msg="cleaning up after shim disconnected" id=20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620 namespace=k8s.io Oct 2 19:21:04.279182 env[1105]: time="2023-10-02T19:21:04.278978951Z" level=info msg="cleaning up dead shim" Oct 2 19:21:04.285777 env[1105]: time="2023-10-02T19:21:04.285731247Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:21:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1930 runtime=io.containerd.runc.v2\n" Oct 2 19:21:04.286274 env[1105]: time="2023-10-02T19:21:04.286204879Z" level=info msg="TearDown network for sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" successfully" Oct 2 19:21:04.286274 env[1105]: time="2023-10-02T19:21:04.286267428Z" level=info msg="StopPodSandbox for \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" returns successfully" Oct 2 19:21:04.399441 kubelet[1411]: I1002 19:21:04.399385 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0773655e-b50d-44c3-8cac-6a7eedfd7601-hubble-tls\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399441 kubelet[1411]: I1002 19:21:04.399435 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cni-path\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399441 kubelet[1411]: I1002 19:21:04.399456 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-host-proc-sys-kernel\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399489 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-config-path\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399508 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-bpf-maps\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399523 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-hostproc\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399521 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cni-path" (OuterVolumeSpecName: "cni-path") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399541 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0773655e-b50d-44c3-8cac-6a7eedfd7601-clustermesh-secrets\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399614 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-cgroup\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399640 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-lib-modules\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399656 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-run\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399682 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hlzp\" (UniqueName: \"kubernetes.io/projected/0773655e-b50d-44c3-8cac-6a7eedfd7601-kube-api-access-7hlzp\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399699 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-xtables-lock\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399724 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-etc-cni-netd\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399741 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-host-proc-sys-net\") pod \"0773655e-b50d-44c3-8cac-6a7eedfd7601\" (UID: \"0773655e-b50d-44c3-8cac-6a7eedfd7601\") " Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399769 1411 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cni-path\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399784 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399782 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.399808 kubelet[1411]: I1002 19:21:04.399805 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.400261 kubelet[1411]: I1002 19:21:04.399818 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.400261 kubelet[1411]: I1002 19:21:04.399832 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.400261 kubelet[1411]: W1002 19:21:04.399874 1411 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0773655e-b50d-44c3-8cac-6a7eedfd7601/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:21:04.401617 kubelet[1411]: I1002 19:21:04.400384 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.401617 kubelet[1411]: I1002 19:21:04.400424 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.401617 kubelet[1411]: I1002 19:21:04.400439 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-hostproc" (OuterVolumeSpecName: "hostproc") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.401617 kubelet[1411]: I1002 19:21:04.400453 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:21:04.401775 kubelet[1411]: I1002 19:21:04.401619 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:21:04.402407 kubelet[1411]: I1002 19:21:04.402357 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0773655e-b50d-44c3-8cac-6a7eedfd7601-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:21:04.402741 kubelet[1411]: I1002 19:21:04.402693 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0773655e-b50d-44c3-8cac-6a7eedfd7601-kube-api-access-7hlzp" (OuterVolumeSpecName: "kube-api-access-7hlzp") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "kube-api-access-7hlzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:21:04.403244 systemd[1]: var-lib-kubelet-pods-0773655e\x2db50d\x2d44c3\x2d8cac\x2d6a7eedfd7601-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:21:04.404657 systemd[1]: var-lib-kubelet-pods-0773655e\x2db50d\x2d44c3\x2d8cac\x2d6a7eedfd7601-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7hlzp.mount: Deactivated successfully. Oct 2 19:21:04.404757 systemd[1]: var-lib-kubelet-pods-0773655e\x2db50d\x2d44c3\x2d8cac\x2d6a7eedfd7601-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:21:04.405239 kubelet[1411]: I1002 19:21:04.405202 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0773655e-b50d-44c3-8cac-6a7eedfd7601-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0773655e-b50d-44c3-8cac-6a7eedfd7601" (UID: "0773655e-b50d-44c3-8cac-6a7eedfd7601"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500437 1411 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-xtables-lock\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500484 1411 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-etc-cni-netd\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500495 1411 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-host-proc-sys-net\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500506 1411 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7hlzp\" (UniqueName: \"kubernetes.io/projected/0773655e-b50d-44c3-8cac-6a7eedfd7601-kube-api-access-7hlzp\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500515 1411 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0773655e-b50d-44c3-8cac-6a7eedfd7601-hubble-tls\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500523 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-config-path\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500531 1411 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-bpf-maps\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500539 1411 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-hostproc\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500549 1411 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-host-proc-sys-kernel\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500557 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-cgroup\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500565 1411 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-lib-modules\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500576 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0773655e-b50d-44c3-8cac-6a7eedfd7601-cilium-run\") on node \"10.0.0.149\" DevicePath \"\"" Oct 2 19:21:04.500584 kubelet[1411]: I1002 19:21:04.500588 1411 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0773655e-b50d-44c3-8cac-6a7eedfd7601-clustermesh-secrets\") on node \"10.0.0.149\" 
DevicePath \"\"" Oct 2 19:21:04.920522 kubelet[1411]: E1002 19:21:04.920335 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:05.249049 kubelet[1411]: I1002 19:21:05.249024 1411 scope.go:115] "RemoveContainer" containerID="f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452" Oct 2 19:21:05.250088 env[1105]: time="2023-10-02T19:21:05.250040348Z" level=info msg="RemoveContainer for \"f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452\"" Oct 2 19:21:05.252617 systemd[1]: Removed slice kubepods-burstable-pod0773655e_b50d_44c3_8cac_6a7eedfd7601.slice. Oct 2 19:21:05.253644 env[1105]: time="2023-10-02T19:21:05.252788427Z" level=info msg="RemoveContainer for \"f9df53db075d346232400eec9189ed49cf5cdd95ebd07235ea81e68482c65452\" returns successfully" Oct 2 19:21:05.917876 kubelet[1411]: I1002 19:21:05.917826 1411 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=0773655e-b50d-44c3-8cac-6a7eedfd7601 path="/var/lib/kubelet/pods/0773655e-b50d-44c3-8cac-6a7eedfd7601/volumes" Oct 2 19:21:05.920745 kubelet[1411]: E1002 19:21:05.920704 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:05.971761 kubelet[1411]: E1002 19:21:05.971711 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:06.413034 kubelet[1411]: I1002 19:21:06.412974 1411 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:21:06.413034 kubelet[1411]: E1002 19:21:06.413040 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.413034 kubelet[1411]: E1002 19:21:06.413050 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.413034 kubelet[1411]: E1002 19:21:06.413056 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.413034 kubelet[1411]: E1002 19:21:06.413062 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.413482 kubelet[1411]: I1002 19:21:06.413094 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.413482 kubelet[1411]: I1002 19:21:06.413100 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.413482 kubelet[1411]: I1002 19:21:06.413106 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.413482 kubelet[1411]: I1002 19:21:06.413111 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.417989 systemd[1]: Created slice kubepods-besteffort-pod0b2ddaef_90dd_463f_b1aa_c465e711b575.slice. 
Oct 2 19:21:06.442207 kubelet[1411]: I1002 19:21:06.442151 1411 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:21:06.442207 kubelet[1411]: E1002 19:21:06.442227 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.442487 kubelet[1411]: I1002 19:21:06.442256 1411 memory_manager.go:346] "RemoveStaleState removing state" podUID="0773655e-b50d-44c3-8cac-6a7eedfd7601" containerName="mount-cgroup" Oct 2 19:21:06.448505 systemd[1]: Created slice kubepods-burstable-pod4aaf6d66_99df_40ca_8c76_eb56c9f8a21c.slice. Oct 2 19:21:06.610825 kubelet[1411]: I1002 19:21:06.610769 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-run\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611074 kubelet[1411]: I1002 19:21:06.610838 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-host-proc-sys-net\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611074 kubelet[1411]: I1002 19:21:06.610877 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-etc-cni-netd\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611074 kubelet[1411]: I1002 19:21:06.610905 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-clustermesh-secrets\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611074 kubelet[1411]: I1002 19:21:06.610927 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-ipsec-secrets\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611074 kubelet[1411]: I1002 19:21:06.610950 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-host-proc-sys-kernel\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611074 kubelet[1411]: I1002 19:21:06.610975 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-bpf-maps\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611066 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-cgroup\") pod \"cilium-2cm8n\" (UID: 
\"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611118 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cni-path\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611134 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-lib-modules\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611149 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-xtables-lock\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611167 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-hubble-tls\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611186 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlhz\" (UniqueName: \"kubernetes.io/projected/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-kube-api-access-4jlhz\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611225 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b2ddaef-90dd-463f-b1aa-c465e711b575-cilium-config-path\") pod \"cilium-operator-574c4bb98d-56wfd\" (UID: \"0b2ddaef-90dd-463f-b1aa-c465e711b575\") " pod="kube-system/cilium-operator-574c4bb98d-56wfd" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611242 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-hostproc\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611261 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxbgq\" (UniqueName: \"kubernetes.io/projected/0b2ddaef-90dd-463f-b1aa-c465e711b575-kube-api-access-bxbgq\") pod \"cilium-operator-574c4bb98d-56wfd\" (UID: \"0b2ddaef-90dd-463f-b1aa-c465e711b575\") " pod="kube-system/cilium-operator-574c4bb98d-56wfd" Oct 2 19:21:06.611327 kubelet[1411]: I1002 19:21:06.611303 1411 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-config-path\") pod \"cilium-2cm8n\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") " pod="kube-system/cilium-2cm8n" Oct 2 19:21:06.758499 
kubelet[1411]: E1002 19:21:06.758463 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:06.759150 env[1105]: time="2023-10-02T19:21:06.759104145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cm8n,Uid:4aaf6d66-99df-40ca-8c76-eb56c9f8a21c,Namespace:kube-system,Attempt:0,}" Oct 2 19:21:06.921373 kubelet[1411]: E1002 19:21:06.921304 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:06.974666 env[1105]: time="2023-10-02T19:21:06.974582181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:21:06.974666 env[1105]: time="2023-10-02T19:21:06.974621305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:21:06.974666 env[1105]: time="2023-10-02T19:21:06.974631615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:21:06.974976 env[1105]: time="2023-10-02T19:21:06.974751000Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080 pid=1958 runtime=io.containerd.runc.v2 Oct 2 19:21:06.985426 systemd[1]: Started cri-containerd-36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080.scope. Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000767 kernel: audit: type=1400 audit(1696274466.995:646): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000824 kernel: audit: type=1400 audit(1696274466.995:647): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000845 kernel: audit: type=1400 audit(1696274466.995:648): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.005238 kernel: audit: type=1400 audit(1696274466.995:649): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.005286 kernel: audit: type=1400 audit(1696274466.995:650): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.006456 kernel: audit: type=1400 audit(1696274466.995:651): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.010286 kernel: audit: type=1400 audit(1696274466.995:652): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.010335 kernel: audit: type=1400 audit(1696274466.995:653): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.995000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.999000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:06.999000 audit: BPF prog-id=75 op=LOAD Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1958 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336363430363838666334633337633263306466383666646139643762 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1958 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336363430363838666334633337633263306466383666646139643762 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.000000 audit: BPF prog-id=76 op=LOAD Oct 2 19:21:07.000000 audit[1966]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000024bd0 items=0 ppid=1958 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336363430363838666334633337633263306466383666646139643762 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.001000 audit: BPF prog-id=77 op=LOAD Oct 2 19:21:07.001000 audit[1966]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000024c18 items=0 ppid=1958 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336363430363838666334633337633263306466383666646139643762 Oct 2 19:21:07.003000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:21:07.003000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { perfmon } for pid=1966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit[1966]: AVC avc: denied { bpf } for pid=1966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.003000 audit: BPF prog-id=78 op=LOAD Oct 2 19:21:07.003000 audit[1966]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000025028 items=0 ppid=1958 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336363430363838666334633337633263306466383666646139643762 Oct 2 19:21:07.020442 kubelet[1411]: E1002 19:21:07.020407 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:07.021286 env[1105]: time="2023-10-02T19:21:07.021250257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-56wfd,Uid:0b2ddaef-90dd-463f-b1aa-c465e711b575,Namespace:kube-system,Attempt:0,}" Oct 2 19:21:07.025262 env[1105]: time="2023-10-02T19:21:07.025197846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cm8n,Uid:4aaf6d66-99df-40ca-8c76-eb56c9f8a21c,Namespace:kube-system,Attempt:0,} returns sandbox id \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\"" Oct 2 19:21:07.026247 kubelet[1411]: E1002 19:21:07.026115 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:07.028377 env[1105]: time="2023-10-02T19:21:07.028336942Z" level=info msg="CreateContainer within sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:21:07.036028 env[1105]: time="2023-10-02T19:21:07.035956501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:21:07.036028 env[1105]: time="2023-10-02T19:21:07.036002707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:21:07.036028 env[1105]: time="2023-10-02T19:21:07.036014900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:21:07.036261 env[1105]: time="2023-10-02T19:21:07.036167809Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8 pid=1999 runtime=io.containerd.runc.v2 Oct 2 19:21:07.040769 env[1105]: time="2023-10-02T19:21:07.040703265Z" level=info msg="CreateContainer within sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\"" Oct 2 19:21:07.041343 env[1105]: time="2023-10-02T19:21:07.041313095Z" level=info msg="StartContainer for \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\"" Oct 2 19:21:07.046112 systemd[1]: Started cri-containerd-61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8.scope. Oct 2 19:21:07.055588 systemd[1]: Started cri-containerd-013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100.scope. Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.059000 audit: BPF prog-id=79 op=LOAD Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 
a3=1c items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643035313461376333363637353833353435333863386637613665 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643035313461376333363637353833353435333863386637613665 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit: BPF prog-id=80 op=LOAD Oct 2 
19:21:07.060000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000214600 items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643035313461376333363637353833353435333863386637613665 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit: BPF prog-id=81 op=LOAD Oct 2 19:21:07.060000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000214648 items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643035313461376333363637353833353435333863386637613665 Oct 2 19:21:07.060000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:21:07.060000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:07.060000 audit: BPF prog-id=82 op=LOAD Oct 2 19:21:07.060000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000214a58 items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:07.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631643035313461376333363637353833353435333863386637613665 Oct 2 19:21:07.066985 systemd[1]: cri-containerd-013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100.scope: Deactivated successfully. 
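[editor's note] The SYSCALL records in the audit spans above each carry a PROCTITLE field whose value is a long hex string (for example proctitle=72756E63002D2D726F6F74...); that payload is the process command line with NUL bytes separating the arguments, so the runc invocations behind the BPF prog loads can be recovered directly from the log. The helper below is an illustrative sketch of that decoding, not a tool referenced by the log; decode_proctitle() and the stdin filter are assumptions made for the example.

    # decode_proctitle.py -- illustrative sketch: turn hex-encoded audit PROCTITLE
    # payloads (NUL-separated argv, as in the records above) into readable command lines.
    import sys

    def decode_proctitle(hex_value: str) -> str:
        """Decode one hex-encoded proctitle payload into a space-joined command line."""
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    if __name__ == "__main__":
        for line in sys.stdin:
            if "proctitle=" not in line:
                continue
            value = line.rsplit("proctitle=", 1)[1].strip()
            # auditd leaves short, printable titles unencoded; only decode pure hex payloads
            if len(value) % 2 == 0 and all(c in "0123456789abcdefABCDEF" for c in value):
                print(decode_proctitle(value))

Decoding the first PROCTITLE value recorded for pid 1966 yields "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/36640688fc4c37c2c0df86fda9d7b", cut off at the audit subsystem's proctitle length limit, which ties those AVC/BPF records to the runc create for the cilium-2cm8n sandbox. [end editor's note]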
Oct 2 19:21:07.085312 env[1105]: time="2023-10-02T19:21:07.085233050Z" level=info msg="shim disconnected" id=013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100 Oct 2 19:21:07.085312 env[1105]: time="2023-10-02T19:21:07.085298464Z" level=warning msg="cleaning up after shim disconnected" id=013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100 namespace=k8s.io Oct 2 19:21:07.085312 env[1105]: time="2023-10-02T19:21:07.085311759Z" level=info msg="cleaning up dead shim" Oct 2 19:21:07.089317 env[1105]: time="2023-10-02T19:21:07.089260660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-56wfd,Uid:0b2ddaef-90dd-463f-b1aa-c465e711b575,Namespace:kube-system,Attempt:0,} returns sandbox id \"61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8\"" Oct 2 19:21:07.090024 kubelet[1411]: E1002 19:21:07.089996 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:07.091937 env[1105]: time="2023-10-02T19:21:07.091892039Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:21:07.092864 env[1105]: time="2023-10-02T19:21:07.092822643Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:21:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2056 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:21:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:21:07.093175 env[1105]: time="2023-10-02T19:21:07.093112218Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 19:21:07.093501 env[1105]: time="2023-10-02T19:21:07.093437682Z" level=error msg="Failed to pipe stdout of container \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\"" error="reading from a closed fifo" Oct 2 19:21:07.093765 env[1105]: time="2023-10-02T19:21:07.093441579Z" level=error msg="Failed to pipe stderr of container \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\"" error="reading from a closed fifo" Oct 2 19:21:07.096304 env[1105]: time="2023-10-02T19:21:07.096248379Z" level=error msg="StartContainer for \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:21:07.096636 kubelet[1411]: E1002 19:21:07.096601 1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100" Oct 2 19:21:07.096739 kubelet[1411]: E1002 19:21:07.096721 1411 kuberuntime_manager.go:1212] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:21:07.096739 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:21:07.096739 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:21:07.096739 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4jlhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:21:07.096943 kubelet[1411]: E1002 19:21:07.096760 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:07.255786 kubelet[1411]: E1002 19:21:07.255745 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:07.257352 env[1105]: time="2023-10-02T19:21:07.257318891Z" level=info msg="CreateContainer within sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:21:07.272554 env[1105]: time="2023-10-02T19:21:07.272434066Z" level=info msg="CreateContainer within sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\"" Oct 2 19:21:07.273173 env[1105]: time="2023-10-02T19:21:07.273145276Z" level=info 
msg="StartContainer for \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\"" Oct 2 19:21:07.287300 systemd[1]: Started cri-containerd-c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738.scope. Oct 2 19:21:07.295615 systemd[1]: cri-containerd-c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738.scope: Deactivated successfully. Oct 2 19:21:07.295951 systemd[1]: Stopped cri-containerd-c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738.scope. Oct 2 19:21:07.302926 env[1105]: time="2023-10-02T19:21:07.302860023Z" level=info msg="shim disconnected" id=c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738 Oct 2 19:21:07.302926 env[1105]: time="2023-10-02T19:21:07.302924394Z" level=warning msg="cleaning up after shim disconnected" id=c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738 namespace=k8s.io Oct 2 19:21:07.303131 env[1105]: time="2023-10-02T19:21:07.302935935Z" level=info msg="cleaning up dead shim" Oct 2 19:21:07.309542 env[1105]: time="2023-10-02T19:21:07.309467354Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:21:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2093 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:21:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:21:07.309813 env[1105]: time="2023-10-02T19:21:07.309747562Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:21:07.309963 env[1105]: time="2023-10-02T19:21:07.309919005Z" level=error msg="Failed to pipe stdout of container \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\"" error="reading from a closed fifo" Oct 2 19:21:07.310070 env[1105]: time="2023-10-02T19:21:07.309968418Z" level=error msg="Failed to pipe stderr of container \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\"" error="reading from a closed fifo" Oct 2 19:21:07.312172 env[1105]: time="2023-10-02T19:21:07.312131835Z" level=error msg="StartContainer for \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:21:07.312431 kubelet[1411]: E1002 19:21:07.312405 1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738" Oct 2 19:21:07.312579 kubelet[1411]: E1002 19:21:07.312546 1411 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:21:07.312579 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:21:07.312579 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 
19:21:07.312579 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4jlhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:21:07.312755 kubelet[1411]: E1002 19:21:07.312593 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:07.922014 kubelet[1411]: E1002 19:21:07.921961 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:08.258627 kubelet[1411]: I1002 19:21:08.258580 1411 scope.go:115] "RemoveContainer" containerID="013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100" Oct 2 19:21:08.259243 kubelet[1411]: I1002 19:21:08.259205 1411 scope.go:115] "RemoveContainer" containerID="013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100" Oct 2 19:21:08.259986 env[1105]: time="2023-10-02T19:21:08.259944954Z" level=info msg="RemoveContainer for \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\"" Oct 2 19:21:08.260313 env[1105]: time="2023-10-02T19:21:08.260022260Z" level=info msg="RemoveContainer for \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\"" Oct 2 19:21:08.260313 env[1105]: time="2023-10-02T19:21:08.260097873Z" level=error msg="RemoveContainer for \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\" failed" error="failed to set removing state for container \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\": container is already in removing state" Oct 2 19:21:08.260387 kubelet[1411]: E1002 19:21:08.260253 1411 remote_runtime.go:368] 
"RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\": container is already in removing state" containerID="013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100" Oct 2 19:21:08.260387 kubelet[1411]: I1002 19:21:08.260280 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100} err="rpc error: code = Unknown desc = failed to set removing state for container \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\": container is already in removing state" Oct 2 19:21:08.298518 env[1105]: time="2023-10-02T19:21:08.298449733Z" level=info msg="RemoveContainer for \"013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100\" returns successfully" Oct 2 19:21:08.298856 kubelet[1411]: E1002 19:21:08.298823 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:08.299129 kubelet[1411]: E1002 19:21:08.299102 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c)\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:08.596503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872488927.mount: Deactivated successfully. Oct 2 19:21:08.922799 kubelet[1411]: E1002 19:21:08.922615 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:09.208732 env[1105]: time="2023-10-02T19:21:09.208554682Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:21:09.210570 env[1105]: time="2023-10-02T19:21:09.210528602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:21:09.212047 env[1105]: time="2023-10-02T19:21:09.212001868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:21:09.212452 env[1105]: time="2023-10-02T19:21:09.212424134Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 19:21:09.214257 env[1105]: time="2023-10-02T19:21:09.214224967Z" level=info msg="CreateContainer within sandbox \"61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:21:09.228826 env[1105]: time="2023-10-02T19:21:09.228752833Z" level=info msg="CreateContainer within sandbox \"61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\"" Oct 2 19:21:09.232677 env[1105]: time="2023-10-02T19:21:09.232596636Z" level=info msg="StartContainer for \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\"" Oct 2 19:21:09.253974 systemd[1]: Started cri-containerd-2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7.scope. Oct 2 19:21:09.267256 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:21:09.267397 kernel: audit: type=1400 audit(1696274469.262:682): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.267422 kernel: audit: type=1400 audit(1696274469.262:683): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.270348 kernel: audit: type=1400 audit(1696274469.262:684): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.270397 kernel: audit: type=1400 audit(1696274469.262:685): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.274157 kernel: audit: type=1400 audit(1696274469.262:686): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.274194 kernel: audit: type=1400 audit(1696274469.262:687): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.278083 kernel: audit: type=1400 audit(1696274469.262:688): avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.278136 kernel: audit: type=1400 audit(1696274469.262:689): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.280016 kernel: audit: type=1400 audit(1696274469.262:690): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.283986 kernel: audit: type=1400 audit(1696274469.262:691): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit: BPF prog-id=83 op=LOAD Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1999 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:09.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264336438626633333065656333346132333032613733633761653135 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1999 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:09.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264336438626633333065656333346132333032613733633761653135 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.262000 audit: BPF prog-id=84 op=LOAD Oct 2 19:21:09.262000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0001e1b30 items=0 ppid=1999 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:09.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264336438626633333065656333346132333032613733633761653135 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.265000 audit: BPF prog-id=85 op=LOAD Oct 2 19:21:09.265000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0001e1b78 items=0 ppid=1999 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:09.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264336438626633333065656333346132333032613733633761653135 Oct 2 19:21:09.269000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:21:09.269000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { perfmon } for pid=2112 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit[2112]: AVC avc: denied { bpf } for pid=2112 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:21:09.269000 audit: BPF prog-id=86 op=LOAD Oct 2 19:21:09.269000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0001e1f88 items=0 ppid=1999 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:21:09.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264336438626633333065656333346132333032613733633761653135 Oct 2 19:21:09.294146 env[1105]: time="2023-10-02T19:21:09.294076765Z" level=info msg="StartContainer for \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\" returns successfully" Oct 2 19:21:09.313000 audit[2123]: AVC avc: denied { map_create } for pid=2123 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c7,c257 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c7,c257 tclass=bpf permissive=0 Oct 2 19:21:09.313000 audit[2123]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0002717d0 a2=48 a3=c0002717c0 items=0 ppid=1999 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c7,c257 key=(null) Oct 2 19:21:09.313000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:21:09.923541 kubelet[1411]: E1002 19:21:09.923482 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:10.190978 kubelet[1411]: W1002 19:21:10.190723 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aaf6d66_99df_40ca_8c76_eb56c9f8a21c.slice/cri-containerd-013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100.scope WatchSource:0}: container "013f5ac524876eb54080ea46fd450a8da3328c94f09bf2c2b4b4168b41929100" in namespace "k8s.io": not found Oct 2 19:21:10.266389 kubelet[1411]: E1002 19:21:10.266357 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:10.388916 kubelet[1411]: I1002 19:21:10.388853 1411 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-56wfd" podStartSLOduration=2.26764774 podCreationTimestamp="2023-10-02 19:21:06 +0000 UTC" firstStartedPulling="2023-10-02 19:21:07.09158451 +0000 UTC m=+191.667534153" lastFinishedPulling="2023-10-02 19:21:09.212715753 +0000 UTC m=+193.788665396" observedRunningTime="2023-10-02 19:21:10.388749728 +0000 UTC m=+194.964699381" watchObservedRunningTime="2023-10-02 19:21:10.388778983 +0000 UTC m=+194.964728626" Oct 2 19:21:10.924149 kubelet[1411]: E1002 19:21:10.924080 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:21:10.972538 kubelet[1411]: E1002 19:21:10.972496 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:11.267919 kubelet[1411]: E1002 19:21:11.267890 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:11.924620 kubelet[1411]: E1002 19:21:11.924563 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:12.925682 kubelet[1411]: E1002 19:21:12.925599 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:13.296879 kubelet[1411]: W1002 19:21:13.296821 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aaf6d66_99df_40ca_8c76_eb56c9f8a21c.slice/cri-containerd-c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738.scope WatchSource:0}: task c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738 not found: not found Oct 2 19:21:13.926115 kubelet[1411]: E1002 19:21:13.926049 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:14.927004 kubelet[1411]: E1002 19:21:14.926885 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:15.787579 kubelet[1411]: E1002 19:21:15.787506 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:15.927685 kubelet[1411]: E1002 19:21:15.927634 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:15.974048 kubelet[1411]: E1002 19:21:15.973998 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:16.928532 kubelet[1411]: E1002 19:21:16.928402 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:17.929538 kubelet[1411]: E1002 19:21:17.929489 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:18.930234 kubelet[1411]: E1002 19:21:18.930154 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:19.931092 kubelet[1411]: E1002 19:21:19.931044 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:20.932207 kubelet[1411]: E1002 19:21:20.932138 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:20.975033 kubelet[1411]: E1002 19:21:20.974994 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:21.916015 kubelet[1411]: E1002 19:21:21.915948 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:21.918098 env[1105]: time="2023-10-02T19:21:21.918059935Z" level=info msg="CreateContainer within sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:21:21.933641 kubelet[1411]: E1002 19:21:21.933561 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:21.937764 env[1105]: time="2023-10-02T19:21:21.937706031Z" level=info msg="CreateContainer within sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\"" Oct 2 19:21:21.938324 env[1105]: time="2023-10-02T19:21:21.938285523Z" level=info msg="StartContainer for \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\"" Oct 2 19:21:21.953664 systemd[1]: Started cri-containerd-4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4.scope. Oct 2 19:21:21.961955 systemd[1]: cri-containerd-4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4.scope: Deactivated successfully. Oct 2 19:21:21.962252 systemd[1]: Stopped cri-containerd-4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4.scope. Oct 2 19:21:22.228608 env[1105]: time="2023-10-02T19:21:22.228460084Z" level=info msg="shim disconnected" id=4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4 Oct 2 19:21:22.228608 env[1105]: time="2023-10-02T19:21:22.228543101Z" level=warning msg="cleaning up after shim disconnected" id=4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4 namespace=k8s.io Oct 2 19:21:22.228608 env[1105]: time="2023-10-02T19:21:22.228558309Z" level=info msg="cleaning up dead shim" Oct 2 19:21:22.234850 env[1105]: time="2023-10-02T19:21:22.234794958Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:21:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2169 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:21:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:21:22.235167 env[1105]: time="2023-10-02T19:21:22.235097727Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:21:22.235409 env[1105]: time="2023-10-02T19:21:22.235339212Z" level=error msg="Failed to pipe stderr of container \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\"" error="reading from a closed fifo" Oct 2 19:21:22.236302 env[1105]: time="2023-10-02T19:21:22.236269095Z" level=error msg="Failed to pipe stdout of container \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\"" error="reading from a closed fifo" Oct 2 19:21:22.301606 env[1105]: time="2023-10-02T19:21:22.301536183Z" level=error msg="StartContainer for \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:21:22.301799 kubelet[1411]: E1002 19:21:22.301704 
1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4" Oct 2 19:21:22.301874 kubelet[1411]: E1002 19:21:22.301808 1411 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:21:22.301874 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:21:22.301874 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:21:22.301874 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4jlhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:21:22.301874 kubelet[1411]: E1002 19:21:22.301842 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:22.303842 kubelet[1411]: I1002 19:21:22.303804 1411 scope.go:115] "RemoveContainer" containerID="c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738" Oct 2 19:21:22.304094 kubelet[1411]: I1002 19:21:22.304080 1411 scope.go:115] "RemoveContainer" containerID="c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738" Oct 2 19:21:22.305081 env[1105]: 
time="2023-10-02T19:21:22.305053198Z" level=info msg="RemoveContainer for \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\"" Oct 2 19:21:22.305445 env[1105]: time="2023-10-02T19:21:22.305357541Z" level=info msg="RemoveContainer for \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\"" Oct 2 19:21:22.305521 env[1105]: time="2023-10-02T19:21:22.305482967Z" level=error msg="RemoveContainer for \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\" failed" error="failed to set removing state for container \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\": container is already in removing state" Oct 2 19:21:22.305606 kubelet[1411]: E1002 19:21:22.305589 1411 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\": container is already in removing state" containerID="c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738" Oct 2 19:21:22.305666 kubelet[1411]: E1002 19:21:22.305613 1411 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738": container is already in removing state; Skipping pod "cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c)" Oct 2 19:21:22.305666 kubelet[1411]: E1002 19:21:22.305657 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:22.305836 kubelet[1411]: E1002 19:21:22.305823 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c)\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:22.372454 env[1105]: time="2023-10-02T19:21:22.372376038Z" level=info msg="RemoveContainer for \"c8f7c1ea5c4083b353555c6234a24e6cfe3e44377ab5ca946a534228d592a738\" returns successfully" Oct 2 19:21:22.927968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4-rootfs.mount: Deactivated successfully. 
Oct 2 19:21:22.934516 kubelet[1411]: E1002 19:21:22.934453 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:23.935330 kubelet[1411]: E1002 19:21:23.935275 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:24.935584 kubelet[1411]: E1002 19:21:24.935521 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:25.334469 kubelet[1411]: W1002 19:21:25.334408 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aaf6d66_99df_40ca_8c76_eb56c9f8a21c.slice/cri-containerd-4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4.scope WatchSource:0}: task 4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4 not found: not found Oct 2 19:21:25.936128 kubelet[1411]: E1002 19:21:25.936084 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:25.975713 kubelet[1411]: E1002 19:21:25.975678 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:26.937159 kubelet[1411]: E1002 19:21:26.937107 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:27.937897 kubelet[1411]: E1002 19:21:27.937832 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:28.939034 kubelet[1411]: E1002 19:21:28.938957 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:29.940063 kubelet[1411]: E1002 19:21:29.940009 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:30.940895 kubelet[1411]: E1002 19:21:30.940812 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:30.976593 kubelet[1411]: E1002 19:21:30.976566 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:31.941315 kubelet[1411]: E1002 19:21:31.941267 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:32.941849 kubelet[1411]: E1002 19:21:32.941783 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:33.942698 kubelet[1411]: E1002 19:21:33.942624 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:34.943654 kubelet[1411]: E1002 19:21:34.943571 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:35.788117 kubelet[1411]: E1002 19:21:35.788054 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:35.944613 kubelet[1411]: E1002 19:21:35.944530 1411 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:35.977554 kubelet[1411]: E1002 19:21:35.977525 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:36.916434 kubelet[1411]: E1002 19:21:36.916396 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:36.916646 kubelet[1411]: E1002 19:21:36.916620 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c)\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:36.945701 kubelet[1411]: E1002 19:21:36.945662 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:37.946756 kubelet[1411]: E1002 19:21:37.946718 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:38.946947 kubelet[1411]: E1002 19:21:38.946870 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:39.947535 kubelet[1411]: E1002 19:21:39.947478 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:40.947661 kubelet[1411]: E1002 19:21:40.947600 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:40.978349 kubelet[1411]: E1002 19:21:40.978331 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:41.947725 kubelet[1411]: E1002 19:21:41.947679 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:42.948572 kubelet[1411]: E1002 19:21:42.948501 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:43.949299 kubelet[1411]: E1002 19:21:43.949236 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:44.950121 kubelet[1411]: E1002 19:21:44.950018 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:45.950666 kubelet[1411]: E1002 19:21:45.950541 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:45.979601 kubelet[1411]: E1002 19:21:45.979539 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:46.950902 kubelet[1411]: E1002 19:21:46.950837 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:47.916325 kubelet[1411]: E1002 19:21:47.916261 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:47.918403 env[1105]: time="2023-10-02T19:21:47.918355386Z" level=info msg="CreateContainer within sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:21:47.929486 env[1105]: time="2023-10-02T19:21:47.929432031Z" level=info msg="CreateContainer within sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b\"" Oct 2 19:21:47.929844 env[1105]: time="2023-10-02T19:21:47.929796380Z" level=info msg="StartContainer for \"24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b\"" Oct 2 19:21:47.945800 systemd[1]: run-containerd-runc-k8s.io-24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b-runc.drkPZJ.mount: Deactivated successfully. Oct 2 19:21:47.947028 systemd[1]: Started cri-containerd-24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b.scope. Oct 2 19:21:47.951184 kubelet[1411]: E1002 19:21:47.951143 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:47.955299 systemd[1]: cri-containerd-24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b.scope: Deactivated successfully. Oct 2 19:21:47.955571 systemd[1]: Stopped cri-containerd-24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b.scope. Oct 2 19:21:47.964750 env[1105]: time="2023-10-02T19:21:47.964694369Z" level=info msg="shim disconnected" id=24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b Oct 2 19:21:47.964750 env[1105]: time="2023-10-02T19:21:47.964751057Z" level=warning msg="cleaning up after shim disconnected" id=24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b namespace=k8s.io Oct 2 19:21:47.964941 env[1105]: time="2023-10-02T19:21:47.964761849Z" level=info msg="cleaning up dead shim" Oct 2 19:21:47.971761 env[1105]: time="2023-10-02T19:21:47.971704954Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:21:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2209 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:21:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:21:47.971999 env[1105]: time="2023-10-02T19:21:47.971945206Z" level=error msg="copy shim log" error="read /proc/self/fd/49: file already closed" Oct 2 19:21:47.972155 env[1105]: time="2023-10-02T19:21:47.972115532Z" level=error msg="Failed to pipe stdout of container \"24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b\"" error="reading from a closed fifo" Oct 2 19:21:47.974820 env[1105]: time="2023-10-02T19:21:47.974746486Z" level=error msg="Failed to pipe stderr of container \"24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b\"" error="reading from a closed fifo" Oct 2 19:21:47.977029 env[1105]: time="2023-10-02T19:21:47.976984875Z" level=error msg="StartContainer for \"24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to 
start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:21:47.977329 kubelet[1411]: E1002 19:21:47.977305 1411 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b" Oct 2 19:21:47.977482 kubelet[1411]: E1002 19:21:47.977455 1411 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:21:47.977482 kubelet[1411]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:21:47.977482 kubelet[1411]: rm /hostbin/cilium-mount Oct 2 19:21:47.977482 kubelet[1411]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4jlhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:21:47.977644 kubelet[1411]: E1002 19:21:47.977509 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:48.350046 kubelet[1411]: I1002 19:21:48.350012 1411 scope.go:115] "RemoveContainer" containerID="4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4" Oct 2 19:21:48.350512 kubelet[1411]: 
I1002 19:21:48.350474 1411 scope.go:115] "RemoveContainer" containerID="4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4" Oct 2 19:21:48.351106 env[1105]: time="2023-10-02T19:21:48.351061374Z" level=info msg="RemoveContainer for \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\"" Oct 2 19:21:48.351316 env[1105]: time="2023-10-02T19:21:48.351287619Z" level=info msg="RemoveContainer for \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\"" Oct 2 19:21:48.351380 env[1105]: time="2023-10-02T19:21:48.351351952Z" level=error msg="RemoveContainer for \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\" failed" error="failed to set removing state for container \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\": container is already in removing state" Oct 2 19:21:48.351521 kubelet[1411]: E1002 19:21:48.351491 1411 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\": container is already in removing state" containerID="4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4" Oct 2 19:21:48.351586 kubelet[1411]: E1002 19:21:48.351528 1411 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4": container is already in removing state; Skipping pod "cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c)" Oct 2 19:21:48.351616 kubelet[1411]: E1002 19:21:48.351607 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:48.351835 kubelet[1411]: E1002 19:21:48.351819 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c)\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:48.422248 env[1105]: time="2023-10-02T19:21:48.422172872Z" level=info msg="RemoveContainer for \"4194910f7fbec100360bd4427191d6ab441376881817f7d9da82dfd2592f18e4\" returns successfully" Oct 2 19:21:48.926435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b-rootfs.mount: Deactivated successfully. 
Oct 2 19:21:48.951368 kubelet[1411]: E1002 19:21:48.951301 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:49.951898 kubelet[1411]: E1002 19:21:49.951849 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:50.952309 kubelet[1411]: E1002 19:21:50.952193 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:50.981072 kubelet[1411]: E1002 19:21:50.981042 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:51.070117 kubelet[1411]: W1002 19:21:51.070058 1411 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aaf6d66_99df_40ca_8c76_eb56c9f8a21c.slice/cri-containerd-24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b.scope WatchSource:0}: task 24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b not found: not found Oct 2 19:21:51.952822 kubelet[1411]: E1002 19:21:51.952772 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:52.953364 kubelet[1411]: E1002 19:21:52.953290 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:53.954156 kubelet[1411]: E1002 19:21:53.954109 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:54.954480 kubelet[1411]: E1002 19:21:54.954407 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:55.787897 kubelet[1411]: E1002 19:21:55.787827 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:55.799970 env[1105]: time="2023-10-02T19:21:55.799926974Z" level=info msg="StopPodSandbox for \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\"" Oct 2 19:21:55.800266 env[1105]: time="2023-10-02T19:21:55.800022377Z" level=info msg="TearDown network for sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" successfully" Oct 2 19:21:55.800266 env[1105]: time="2023-10-02T19:21:55.800056522Z" level=info msg="StopPodSandbox for \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" returns successfully" Oct 2 19:21:55.800420 env[1105]: time="2023-10-02T19:21:55.800385383Z" level=info msg="RemovePodSandbox for \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\"" Oct 2 19:21:55.800460 env[1105]: time="2023-10-02T19:21:55.800419227Z" level=info msg="Forcibly stopping sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\"" Oct 2 19:21:55.800498 env[1105]: time="2023-10-02T19:21:55.800482048Z" level=info msg="TearDown network for sandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" successfully" Oct 2 19:21:55.803641 env[1105]: time="2023-10-02T19:21:55.803597830Z" level=info msg="RemovePodSandbox \"20b14a723b09b10c0019fd31e1d3f5a210308fb829470bcd0502939610944620\" returns successfully" Oct 2 19:21:55.954534 kubelet[1411]: E1002 19:21:55.954489 1411 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:55.981603 kubelet[1411]: E1002 19:21:55.981572 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:21:56.955429 kubelet[1411]: E1002 19:21:56.955375 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:57.955621 kubelet[1411]: E1002 19:21:57.955578 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:58.956558 kubelet[1411]: E1002 19:21:58.956493 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:21:59.916360 kubelet[1411]: E1002 19:21:59.916318 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:21:59.916572 kubelet[1411]: E1002 19:21:59.916541 1411 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-2cm8n_kube-system(4aaf6d66-99df-40ca-8c76-eb56c9f8a21c)\"" pod="kube-system/cilium-2cm8n" podUID=4aaf6d66-99df-40ca-8c76-eb56c9f8a21c Oct 2 19:21:59.957528 kubelet[1411]: E1002 19:21:59.957480 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:00.958040 kubelet[1411]: E1002 19:22:00.957991 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:00.982799 kubelet[1411]: E1002 19:22:00.982783 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:01.959019 kubelet[1411]: E1002 19:22:01.958973 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:02.959296 kubelet[1411]: E1002 19:22:02.959247 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:03.960308 kubelet[1411]: E1002 19:22:03.960264 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:04.961089 kubelet[1411]: E1002 19:22:04.961039 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:05.962056 kubelet[1411]: E1002 19:22:05.962022 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:05.983202 kubelet[1411]: E1002 19:22:05.983184 1411 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:22:06.916290 kubelet[1411]: E1002 19:22:06.916252 1411 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:22:06.962646 kubelet[1411]: E1002 19:22:06.962614 1411 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:22:07.727722 env[1105]: time="2023-10-02T19:22:07.727666681Z" level=info msg="StopPodSandbox for \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\"" Oct 2 19:22:07.728094 env[1105]: time="2023-10-02T19:22:07.727748417Z" level=info msg="Container to stop \"24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:22:07.729097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080-shm.mount: Deactivated successfully. Oct 2 19:22:07.733303 systemd[1]: cri-containerd-36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080.scope: Deactivated successfully. Oct 2 19:22:07.733000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:22:07.734235 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:22:07.734322 kernel: audit: type=1334 audit(1696274527.733:701): prog-id=75 op=UNLOAD Oct 2 19:22:07.738000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:22:07.740234 kernel: audit: type=1334 audit(1696274527.738:702): prog-id=78 op=UNLOAD Oct 2 19:22:07.740797 env[1105]: time="2023-10-02T19:22:07.740757145Z" level=info msg="StopContainer for \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\" with timeout 30 (s)" Oct 2 19:22:07.741190 env[1105]: time="2023-10-02T19:22:07.741165476Z" level=info msg="Stop container \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\" with signal terminated" Oct 2 19:22:07.750838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080-rootfs.mount: Deactivated successfully. Oct 2 19:22:07.751434 systemd[1]: cri-containerd-2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7.scope: Deactivated successfully. Oct 2 19:22:07.751000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:22:07.753233 kernel: audit: type=1334 audit(1696274527.751:703): prog-id=83 op=UNLOAD Oct 2 19:22:07.755000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:22:07.756231 kernel: audit: type=1334 audit(1696274527.755:704): prog-id=86 op=UNLOAD Oct 2 19:22:07.765881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7-rootfs.mount: Deactivated successfully. 
Oct 2 19:22:07.767336 env[1105]: time="2023-10-02T19:22:07.767297441Z" level=info msg="shim disconnected" id=2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7 Oct 2 19:22:07.767499 env[1105]: time="2023-10-02T19:22:07.767470892Z" level=warning msg="cleaning up after shim disconnected" id=2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7 namespace=k8s.io Oct 2 19:22:07.767499 env[1105]: time="2023-10-02T19:22:07.767487924Z" level=info msg="cleaning up dead shim" Oct 2 19:22:07.768706 env[1105]: time="2023-10-02T19:22:07.768666558Z" level=info msg="shim disconnected" id=36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080 Oct 2 19:22:07.768762 env[1105]: time="2023-10-02T19:22:07.768706945Z" level=warning msg="cleaning up after shim disconnected" id=36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080 namespace=k8s.io Oct 2 19:22:07.768762 env[1105]: time="2023-10-02T19:22:07.768714470Z" level=info msg="cleaning up dead shim" Oct 2 19:22:07.774655 env[1105]: time="2023-10-02T19:22:07.774607605Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2265 runtime=io.containerd.runc.v2\n" Oct 2 19:22:07.774921 env[1105]: time="2023-10-02T19:22:07.774892339Z" level=info msg="TearDown network for sandbox \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" successfully" Oct 2 19:22:07.774921 env[1105]: time="2023-10-02T19:22:07.774916596Z" level=info msg="StopPodSandbox for \"36640688fc4c37c2c0df86fda9d7bdf30174842a78c4576a51ab752943288080\" returns successfully" Oct 2 19:22:07.774988 env[1105]: time="2023-10-02T19:22:07.774630048Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2260 runtime=io.containerd.runc.v2\n" Oct 2 19:22:07.777815 env[1105]: time="2023-10-02T19:22:07.777786482Z" level=info msg="StopContainer for \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\" returns successfully" Oct 2 19:22:07.778184 env[1105]: time="2023-10-02T19:22:07.778157001Z" level=info msg="StopPodSandbox for \"61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8\"" Oct 2 19:22:07.778263 env[1105]: time="2023-10-02T19:22:07.778226473Z" level=info msg="Container to stop \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:22:07.779333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8-shm.mount: Deactivated successfully. Oct 2 19:22:07.785339 systemd[1]: cri-containerd-61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8.scope: Deactivated successfully. Oct 2 19:22:07.785000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:22:07.787234 kernel: audit: type=1334 audit(1696274527.785:705): prog-id=79 op=UNLOAD Oct 2 19:22:07.789000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:22:07.790231 kernel: audit: type=1334 audit(1696274527.789:706): prog-id=82 op=UNLOAD Oct 2 19:22:07.799469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8-rootfs.mount: Deactivated successfully. 
Oct 2 19:22:07.806758 env[1105]: time="2023-10-02T19:22:07.806706147Z" level=info msg="shim disconnected" id=61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8
Oct 2 19:22:07.806879 env[1105]: time="2023-10-02T19:22:07.806759078Z" level=warning msg="cleaning up after shim disconnected" id=61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8 namespace=k8s.io
Oct 2 19:22:07.806879 env[1105]: time="2023-10-02T19:22:07.806767664Z" level=info msg="cleaning up dead shim"
Oct 2 19:22:07.813154 env[1105]: time="2023-10-02T19:22:07.813102294Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:22:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2303 runtime=io.containerd.runc.v2\n"
Oct 2 19:22:07.813410 env[1105]: time="2023-10-02T19:22:07.813378592Z" level=info msg="TearDown network for sandbox \"61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8\" successfully"
Oct 2 19:22:07.813410 env[1105]: time="2023-10-02T19:22:07.813401536Z" level=info msg="StopPodSandbox for \"61d0514a7c366758354538c8f7a6e6f13f69243309fb1f393a1d94fedd3882b8\" returns successfully"
Oct 2 19:22:07.888873 kubelet[1411]: I1002 19:22:07.888843 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-bpf-maps\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888883 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-cgroup\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888911 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cni-path\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888921 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888933 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-xtables-lock\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888946 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888953 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-hostproc\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888959 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cni-path" (OuterVolumeSpecName: "cni-path") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888972 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-run\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888972 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-hostproc" (OuterVolumeSpecName: "hostproc") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888983 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888993 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-etc-cni-netd\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.888998 kubelet[1411]: I1002 19:22:07.888999 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889011 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889021 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-host-proc-sys-kernel\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889053 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-clustermesh-secrets\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889069 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-ipsec-secrets\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889068 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889086 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b2ddaef-90dd-463f-b1aa-c465e711b575-cilium-config-path\") pod \"0b2ddaef-90dd-463f-b1aa-c465e711b575\" (UID: \"0b2ddaef-90dd-463f-b1aa-c465e711b575\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889121 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-host-proc-sys-net\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889151 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jlhz\" (UniqueName: \"kubernetes.io/projected/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-kube-api-access-4jlhz\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889175 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxbgq\" (UniqueName: \"kubernetes.io/projected/0b2ddaef-90dd-463f-b1aa-c465e711b575-kube-api-access-bxbgq\") pod \"0b2ddaef-90dd-463f-b1aa-c465e711b575\" (UID: \"0b2ddaef-90dd-463f-b1aa-c465e711b575\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889201 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-config-path\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889241 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-lib-modules\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.889526 kubelet[1411]: W1002 19:22:07.889225 1411 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0b2ddaef-90dd-463f-b1aa-c465e711b575/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889271 1411 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-hubble-tls\") pod \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\" (UID: \"4aaf6d66-99df-40ca-8c76-eb56c9f8a21c\") "
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889295 1411 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-xtables-lock\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889309 1411 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-hostproc\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889321 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-run\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.889526 kubelet[1411]: I1002 19:22:07.889334 1411 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-etc-cni-netd\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.890203 kubelet[1411]: I1002 19:22:07.889344 1411 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-bpf-maps\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.890203 kubelet[1411]: I1002 19:22:07.889356 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-cgroup\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.890203 kubelet[1411]: I1002 19:22:07.889368 1411 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cni-path\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.890203 kubelet[1411]: I1002 19:22:07.889380 1411 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-host-proc-sys-kernel\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.890203 kubelet[1411]: I1002 19:22:07.889577 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.890203 kubelet[1411]: W1002 19:22:07.889677 1411 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:22:07.890203 kubelet[1411]: I1002 19:22:07.889745 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:22:07.891899 kubelet[1411]: I1002 19:22:07.890895 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b2ddaef-90dd-463f-b1aa-c465e711b575-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b2ddaef-90dd-463f-b1aa-c465e711b575" (UID: "0b2ddaef-90dd-463f-b1aa-c465e711b575"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:22:07.891899 kubelet[1411]: I1002 19:22:07.891869 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:22:07.892171 kubelet[1411]: I1002 19:22:07.892144 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:22:07.892778 kubelet[1411]: I1002 19:22:07.892752 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:22:07.893626 kubelet[1411]: I1002 19:22:07.893599 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:22:07.893795 kubelet[1411]: I1002 19:22:07.893774 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2ddaef-90dd-463f-b1aa-c465e711b575-kube-api-access-bxbgq" (OuterVolumeSpecName: "kube-api-access-bxbgq") pod "0b2ddaef-90dd-463f-b1aa-c465e711b575" (UID: "0b2ddaef-90dd-463f-b1aa-c465e711b575"). InnerVolumeSpecName "kube-api-access-bxbgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:22:07.894367 kubelet[1411]: I1002 19:22:07.894342 1411 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-kube-api-access-4jlhz" (OuterVolumeSpecName: "kube-api-access-4jlhz") pod "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c" (UID: "4aaf6d66-99df-40ca-8c76-eb56c9f8a21c"). InnerVolumeSpecName "kube-api-access-4jlhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:22:07.919962 systemd[1]: Removed slice kubepods-burstable-pod4aaf6d66_99df_40ca_8c76_eb56c9f8a21c.slice.
Oct 2 19:22:07.921140 systemd[1]: Removed slice kubepods-besteffort-pod0b2ddaef_90dd_463f_b1aa_c465e711b575.slice.
Oct 2 19:22:07.963587 kubelet[1411]: E1002 19:22:07.963545 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:22:07.989858 kubelet[1411]: I1002 19:22:07.989828 1411 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bxbgq\" (UniqueName: \"kubernetes.io/projected/0b2ddaef-90dd-463f-b1aa-c465e711b575-kube-api-access-bxbgq\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.989858 kubelet[1411]: I1002 19:22:07.989857 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-config-path\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.989964 kubelet[1411]: I1002 19:22:07.989868 1411 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-lib-modules\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.989964 kubelet[1411]: I1002 19:22:07.989877 1411 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-hubble-tls\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.989964 kubelet[1411]: I1002 19:22:07.989886 1411 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-clustermesh-secrets\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.989964 kubelet[1411]: I1002 19:22:07.989895 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-cilium-ipsec-secrets\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.989964 kubelet[1411]: I1002 19:22:07.989906 1411 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b2ddaef-90dd-463f-b1aa-c465e711b575-cilium-config-path\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.989964 kubelet[1411]: I1002 19:22:07.989918 1411 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-host-proc-sys-net\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:07.989964 kubelet[1411]: I1002 19:22:07.989947 1411 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4jlhz\" (UniqueName: \"kubernetes.io/projected/4aaf6d66-99df-40ca-8c76-eb56c9f8a21c-kube-api-access-4jlhz\") on node \"10.0.0.149\" DevicePath \"\""
Oct 2 19:22:08.382659 kubelet[1411]: I1002 19:22:08.382574 1411 scope.go:115] "RemoveContainer" containerID="2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7"
Oct 2 19:22:08.383557 env[1105]: time="2023-10-02T19:22:08.383518621Z" level=info msg="RemoveContainer for \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\""
Oct 2 19:22:08.386823 env[1105]: time="2023-10-02T19:22:08.386786666Z" level=info msg="RemoveContainer for \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\" returns successfully"
Oct 2 19:22:08.386951 kubelet[1411]: I1002 19:22:08.386918 1411 scope.go:115] "RemoveContainer" containerID="2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7"
Oct 2 19:22:08.387175 env[1105]: time="2023-10-02T19:22:08.387106288Z" level=error msg="ContainerStatus for \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\": not found"
Oct 2 19:22:08.387305 kubelet[1411]: E1002 19:22:08.387291 1411 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\": not found" containerID="2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7"
Oct 2 19:22:08.387364 kubelet[1411]: I1002 19:22:08.387317 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7} err="failed to get container status \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d3d8bf330eec34a2302a73c7ae1594d0c7ceb4965d0736c185bd19703b512a7\": not found"
Oct 2 19:22:08.387364 kubelet[1411]: I1002 19:22:08.387326 1411 scope.go:115] "RemoveContainer" containerID="24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b"
Oct 2 19:22:08.388070 env[1105]: time="2023-10-02T19:22:08.388045883Z" level=info msg="RemoveContainer for \"24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b\""
Oct 2 19:22:08.391093 env[1105]: time="2023-10-02T19:22:08.391072308Z" level=info msg="RemoveContainer for \"24c68e98639c25b46bb8d492eeb365c0ac0350f35bc16cd422069d5febd0ee7b\" returns successfully"
Oct 2 19:22:08.728896 systemd[1]: var-lib-kubelet-pods-4aaf6d66\x2d99df\x2d40ca\x2d8c76\x2deb56c9f8a21c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jlhz.mount: Deactivated successfully.
Oct 2 19:22:08.728987 systemd[1]: var-lib-kubelet-pods-0b2ddaef\x2d90dd\x2d463f\x2db1aa\x2dc465e711b575-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbxbgq.mount: Deactivated successfully.
Oct 2 19:22:08.729044 systemd[1]: var-lib-kubelet-pods-4aaf6d66\x2d99df\x2d40ca\x2d8c76\x2deb56c9f8a21c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:22:08.729100 systemd[1]: var-lib-kubelet-pods-4aaf6d66\x2d99df\x2d40ca\x2d8c76\x2deb56c9f8a21c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:22:08.729148 systemd[1]: var-lib-kubelet-pods-4aaf6d66\x2d99df\x2d40ca\x2d8c76\x2deb56c9f8a21c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:22:08.964575 kubelet[1411]: E1002 19:22:08.964544 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"