Oct 2 19:37:31.771872 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:37:31.771896 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:37:31.771909 kernel: BIOS-provided physical RAM map: Oct 2 19:37:31.771917 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 2 19:37:31.771924 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 2 19:37:31.771931 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 2 19:37:31.771940 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 2 19:37:31.771948 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 2 19:37:31.771956 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 2 19:37:31.771966 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 2 19:37:31.771973 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 2 19:37:31.771981 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 2 19:37:31.771989 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 2 19:37:31.771997 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 2 19:37:31.772006 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 2 19:37:31.772016 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 2 19:37:31.772024 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 2 19:37:31.772032 kernel: NX (Execute Disable) protection: active Oct 2 19:37:31.772040 kernel: e820: update [mem 0x9b3f9018-0x9b402c57] usable ==> usable Oct 2 19:37:31.772049 kernel: e820: update [mem 0x9b3f9018-0x9b402c57] usable ==> usable Oct 2 19:37:31.772057 kernel: e820: update [mem 0x9b1ac018-0x9b1e8e57] usable ==> usable Oct 2 19:37:31.772065 kernel: e820: update [mem 0x9b1ac018-0x9b1e8e57] usable ==> usable Oct 2 19:37:31.772073 kernel: extended physical RAM map: Oct 2 19:37:31.772081 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 2 19:37:31.772089 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 2 19:37:31.772099 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 2 19:37:31.772107 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 2 19:37:31.772115 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 2 19:37:31.772123 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 2 19:37:31.772142 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 2 19:37:31.772152 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1ac017] usable Oct 2 19:37:31.772176 kernel: reserve setup_data: [mem 0x000000009b1ac018-0x000000009b1e8e57] usable Oct 2 19:37:31.772185 kernel: reserve setup_data: [mem 0x000000009b1e8e58-0x000000009b3f9017] usable Oct 2 19:37:31.772193 kernel: reserve setup_data: [mem 0x000000009b3f9018-0x000000009b402c57] 
usable Oct 2 19:37:31.772201 kernel: reserve setup_data: [mem 0x000000009b402c58-0x000000009c8eefff] usable Oct 2 19:37:31.772209 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 2 19:37:31.772220 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 2 19:37:31.772228 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 2 19:37:31.772236 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 2 19:37:31.772245 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 2 19:37:31.772257 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 2 19:37:31.772265 kernel: efi: EFI v2.70 by EDK II Oct 2 19:37:31.772274 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Oct 2 19:37:31.772284 kernel: random: crng init done Oct 2 19:37:31.772293 kernel: SMBIOS 2.8 present. Oct 2 19:37:31.772302 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Oct 2 19:37:31.772311 kernel: Hypervisor detected: KVM Oct 2 19:37:31.772319 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:37:31.772328 kernel: kvm-clock: cpu 0, msr 62f8a001, primary cpu clock Oct 2 19:37:31.772336 kernel: kvm-clock: using sched offset of 3920851135 cycles Oct 2 19:37:31.772346 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:37:31.772355 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:37:31.772366 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:37:31.772375 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:37:31.772384 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 2 19:37:31.772394 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:37:31.772403 kernel: Using GB pages for direct mapping Oct 2 19:37:31.772412 kernel: Secure boot disabled Oct 2 19:37:31.772421 kernel: ACPI: Early table checksum verification disabled Oct 2 19:37:31.772430 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 2 19:37:31.772439 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Oct 2 19:37:31.772451 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:37:31.772460 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:37:31.772469 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 2 19:37:31.772478 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:37:31.772487 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:37:31.772496 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:37:31.772505 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 2 19:37:31.772514 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Oct 2 19:37:31.772523 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Oct 2 19:37:31.772534 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 2 19:37:31.772543 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Oct 2 19:37:31.772552 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Oct 2 19:37:31.772562 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Oct 2 19:37:31.772571 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Oct 2 19:37:31.772580 kernel: No NUMA configuration found Oct 2 19:37:31.772589 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 2 19:37:31.772598 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 2 19:37:31.772607 kernel: Zone ranges: Oct 2 19:37:31.772618 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:37:31.772627 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 2 19:37:31.772636 kernel: Normal empty Oct 2 19:37:31.772645 kernel: Movable zone start for each node Oct 2 19:37:31.772654 kernel: Early memory node ranges Oct 2 19:37:31.772663 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 2 19:37:31.772679 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 2 19:37:31.772689 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 2 19:37:31.772698 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 2 19:37:31.772709 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 2 19:37:31.772730 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 2 19:37:31.772739 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 2 19:37:31.772748 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:37:31.772758 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 2 19:37:31.772767 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 2 19:37:31.772776 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:37:31.772785 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 2 19:37:31.772794 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 2 19:37:31.772806 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 2 19:37:31.772815 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 2 19:37:31.772824 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:37:31.772833 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:37:31.772842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:37:31.772851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:37:31.772860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:37:31.772870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:37:31.772879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:37:31.772890 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:37:31.772899 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:37:31.772908 kernel: TSC deadline timer available Oct 2 19:37:31.772917 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:37:31.772926 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:37:31.772935 kernel: kvm-guest: setup PV sched yield Oct 2 19:37:31.772945 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Oct 2 19:37:31.772954 kernel: Booting paravirtualized kernel on KVM Oct 2 19:37:31.772963 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:37:31.772972 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:37:31.772984 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:37:31.772993 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 
2 19:37:31.773025 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:37:31.773043 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:37:31.773052 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0 Oct 2 19:37:31.773061 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:37:31.773075 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 19:37:31.773084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 2 19:37:31.773093 kernel: Policy zone: DMA32 Oct 2 19:37:31.773104 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:37:31.773114 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:37:31.773126 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:37:31.773135 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:37:31.773144 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:37:31.773155 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 166304K reserved, 0K cma-reserved) Oct 2 19:37:31.773178 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:37:31.773192 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:37:31.773201 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:37:31.773211 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:37:31.773221 kernel: rcu: RCU event tracing is enabled. Oct 2 19:37:31.773231 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:37:31.773240 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:37:31.773250 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:37:31.773270 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:37:31.773287 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:37:31.773299 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:37:31.773308 kernel: Console: colour dummy device 80x25 Oct 2 19:37:31.773318 kernel: printk: console [ttyS0] enabled Oct 2 19:37:31.773328 kernel: ACPI: Core revision 20210730 Oct 2 19:37:31.773338 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:37:31.773348 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:37:31.773357 kernel: x2apic enabled Oct 2 19:37:31.773367 kernel: Switched APIC routing to physical x2apic. Oct 2 19:37:31.773377 kernel: kvm-guest: setup PV IPIs Oct 2 19:37:31.773389 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:37:31.773399 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:37:31.773409 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:37:31.773419 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:37:31.773428 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:37:31.773438 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:37:31.773448 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:37:31.773458 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:37:31.773468 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:37:31.773480 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:37:31.773489 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:37:31.773499 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:37:31.773508 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:37:31.773518 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:37:31.773548 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:37:31.773559 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:37:31.773569 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:37:31.773581 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:37:31.773591 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:37:31.773601 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:37:31.773611 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:37:31.773621 kernel: LSM: Security Framework initializing Oct 2 19:37:31.773631 kernel: SELinux: Initializing. Oct 2 19:37:31.773641 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:37:31.773651 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:37:31.773661 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:37:31.773681 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:37:31.773692 kernel: ... version: 0 Oct 2 19:37:31.773702 kernel: ... bit width: 48 Oct 2 19:37:31.773711 kernel: ... generic registers: 6 Oct 2 19:37:31.773721 kernel: ... value mask: 0000ffffffffffff Oct 2 19:37:31.773731 kernel: ... max period: 00007fffffffffff Oct 2 19:37:31.773740 kernel: ... fixed-purpose events: 0 Oct 2 19:37:31.773749 kernel: ... event mask: 000000000000003f Oct 2 19:37:31.773759 kernel: signal: max sigframe size: 1776 Oct 2 19:37:31.773768 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:37:31.773780 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:37:31.773790 kernel: x86: Booting SMP configuration: Oct 2 19:37:31.773799 kernel: .... 
node #0, CPUs: #1 Oct 2 19:37:31.773809 kernel: kvm-clock: cpu 1, msr 62f8a041, secondary cpu clock Oct 2 19:37:31.773819 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:37:31.773829 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0 Oct 2 19:37:31.773839 kernel: #2 Oct 2 19:37:31.773850 kernel: kvm-clock: cpu 2, msr 62f8a081, secondary cpu clock Oct 2 19:37:31.773860 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:37:31.773871 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0 Oct 2 19:37:31.773881 kernel: #3 Oct 2 19:37:31.773891 kernel: kvm-clock: cpu 3, msr 62f8a0c1, secondary cpu clock Oct 2 19:37:31.773901 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:37:31.773911 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0 Oct 2 19:37:31.773920 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:37:31.773930 kernel: smpboot: Max logical packages: 1 Oct 2 19:37:31.773939 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:37:31.773948 kernel: devtmpfs: initialized Oct 2 19:37:31.773960 kernel: x86/mm: Memory block size: 128MB Oct 2 19:37:31.773970 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 2 19:37:31.773980 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 2 19:37:31.773990 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 2 19:37:31.774000 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 2 19:37:31.774010 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 2 19:37:31.774020 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:37:31.774030 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:37:31.774040 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:37:31.774052 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:37:31.774062 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:37:31.774072 kernel: audit: type=2000 audit(1696275450.957:1): state=initialized audit_enabled=0 res=1 Oct 2 19:37:31.774082 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:37:31.774092 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:37:31.774101 kernel: cpuidle: using governor menu Oct 2 19:37:31.774111 kernel: ACPI: bus type PCI registered Oct 2 19:37:31.774121 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:37:31.774130 kernel: dca service started, version 1.12.1 Oct 2 19:37:31.774141 kernel: PCI: Using configuration type 1 for base access Oct 2 19:37:31.774151 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:37:31.774174 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:37:31.774185 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:37:31.774195 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:37:31.774205 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:37:31.774215 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:37:31.774225 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:37:31.774234 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:37:31.774247 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:37:31.774257 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:37:31.774267 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:37:31.774277 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:37:31.774286 kernel: ACPI: Interpreter enabled Oct 2 19:37:31.774296 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:37:31.774306 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:37:31.774316 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:37:31.774326 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:37:31.774338 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:37:31.774479 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:37:31.774496 kernel: acpiphp: Slot [3] registered Oct 2 19:37:31.774506 kernel: acpiphp: Slot [4] registered Oct 2 19:37:31.774515 kernel: acpiphp: Slot [5] registered Oct 2 19:37:31.774525 kernel: acpiphp: Slot [6] registered Oct 2 19:37:31.774535 kernel: acpiphp: Slot [7] registered Oct 2 19:37:31.774544 kernel: acpiphp: Slot [8] registered Oct 2 19:37:31.774553 kernel: acpiphp: Slot [9] registered Oct 2 19:37:31.774565 kernel: acpiphp: Slot [10] registered Oct 2 19:37:31.774574 kernel: acpiphp: Slot [11] registered Oct 2 19:37:31.774584 kernel: acpiphp: Slot [12] registered Oct 2 19:37:31.774593 kernel: acpiphp: Slot [13] registered Oct 2 19:37:31.774603 kernel: acpiphp: Slot [14] registered Oct 2 19:37:31.774612 kernel: acpiphp: Slot [15] registered Oct 2 19:37:31.774622 kernel: acpiphp: Slot [16] registered Oct 2 19:37:31.774632 kernel: acpiphp: Slot [17] registered Oct 2 19:37:31.774642 kernel: acpiphp: Slot [18] registered Oct 2 19:37:31.774653 kernel: acpiphp: Slot [19] registered Oct 2 19:37:31.774663 kernel: acpiphp: Slot [20] registered Oct 2 19:37:31.774680 kernel: acpiphp: Slot [21] registered Oct 2 19:37:31.774713 kernel: acpiphp: Slot [22] registered Oct 2 19:37:31.774730 kernel: acpiphp: Slot [23] registered Oct 2 19:37:31.774740 kernel: acpiphp: Slot [24] registered Oct 2 19:37:31.774749 kernel: acpiphp: Slot [25] registered Oct 2 19:37:31.774759 kernel: acpiphp: Slot [26] registered Oct 2 19:37:31.774772 kernel: acpiphp: Slot [27] registered Oct 2 19:37:31.774784 kernel: acpiphp: Slot [28] registered Oct 2 19:37:31.774793 kernel: acpiphp: Slot [29] registered Oct 2 19:37:31.774802 kernel: acpiphp: Slot [30] registered Oct 2 19:37:31.774812 kernel: acpiphp: Slot [31] registered Oct 2 19:37:31.774821 kernel: PCI host bridge to bus 0000:00 Oct 2 19:37:31.774929 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:37:31.775019 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:37:31.775107 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:37:31.775213 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:37:31.775302 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Oct 2 19:37:31.775387 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:37:31.775500 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:37:31.775608 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:37:31.775727 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:37:31.776582 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:37:31.776703 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:37:31.776799 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:37:31.776887 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:37:31.776973 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:37:31.777077 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:37:31.777223 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 2 19:37:31.777320 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Oct 2 19:37:31.777415 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:37:31.777513 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 2 19:37:31.777636 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Oct 2 19:37:31.777748 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 2 19:37:31.777844 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Oct 2 19:37:31.777942 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:37:31.778051 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:37:31.778152 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:37:31.778272 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 2 19:37:31.778371 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 2 19:37:31.778475 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:37:31.778560 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:37:31.778659 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 2 19:37:31.778777 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 2 19:37:31.778885 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:37:31.778967 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:37:31.779044 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Oct 2 19:37:31.779120 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 2 19:37:31.779210 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 2 19:37:31.779221 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:37:31.779232 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:37:31.779241 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:37:31.779249 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:37:31.779257 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:37:31.779266 kernel: iommu: Default domain type: Translated Oct 2 19:37:31.779274 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:37:31.779351 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:37:31.779426 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:37:31.779501 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Oct 2 19:37:31.779513 kernel: vgaarb: loaded Oct 2 19:37:31.779522 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:37:31.779530 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Oct 2 19:37:31.779539 kernel: PTP clock support registered Oct 2 19:37:31.779547 kernel: Registered efivars operations Oct 2 19:37:31.779555 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:37:31.779563 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:37:31.779572 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 2 19:37:31.779580 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 2 19:37:31.779589 kernel: e820: reserve RAM buffer [mem 0x9b1ac018-0x9bffffff] Oct 2 19:37:31.779597 kernel: e820: reserve RAM buffer [mem 0x9b3f9018-0x9bffffff] Oct 2 19:37:31.779605 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 2 19:37:31.779614 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 2 19:37:31.779622 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:37:31.779630 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:37:31.779639 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:37:31.779647 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:37:31.779656 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:37:31.779665 kernel: pnp: PnP ACPI init Oct 2 19:37:31.779753 kernel: pnp 00:02: [dma 2] Oct 2 19:37:31.779765 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:37:31.779774 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:37:31.779782 kernel: NET: Registered PF_INET protocol family Oct 2 19:37:31.779791 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:37:31.779799 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:37:31.779808 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:37:31.779818 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:37:31.779827 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:37:31.779835 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:37:31.779843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:37:31.779852 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:37:31.779860 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:37:31.779868 kernel: NET: Registered PF_XDP protocol family Oct 2 19:37:31.779947 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 2 19:37:31.780037 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 2 19:37:31.780109 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:37:31.780190 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:37:31.780255 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:37:31.780313 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:37:31.780372 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Oct 2 19:37:31.780439 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:37:31.780505 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:37:31.780574 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:37:31.780584 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:37:31.780591 kernel: Initialise system trusted keyrings Oct 2 19:37:31.780598 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:37:31.780606 kernel: Key type asymmetric registered Oct 2 19:37:31.780613 kernel: Asymmetric key parser 'x509' registered Oct 2 19:37:31.780620 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:37:31.780627 kernel: io scheduler mq-deadline registered Oct 2 19:37:31.780634 kernel: io scheduler kyber registered Oct 2 19:37:31.780643 kernel: io scheduler bfq registered Oct 2 19:37:31.780650 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:37:31.780658 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:37:31.780665 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:37:31.780678 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:37:31.780686 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:37:31.780693 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:37:31.780700 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:37:31.780708 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:37:31.780716 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:37:31.780788 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:37:31.780800 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:37:31.780860 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:37:31.780923 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:37:31 UTC (1696275451) Oct 2 19:37:31.780984 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:37:31.780993 kernel: efifb: probing for efifb Oct 2 19:37:31.781000 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 2 19:37:31.781007 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 2 19:37:31.781014 kernel: efifb: scrolling: redraw Oct 2 19:37:31.781022 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 2 19:37:31.781029 kernel: Console: switching to colour frame buffer device 160x50 Oct 2 19:37:31.781036 kernel: fb0: EFI VGA frame buffer device Oct 2 19:37:31.781045 kernel: pstore: Registered efi as persistent store backend Oct 2 19:37:31.781053 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:37:31.781060 kernel: Segment Routing with IPv6 Oct 2 19:37:31.781067 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:37:31.781074 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:37:31.781081 kernel: Key type dns_resolver registered Oct 2 19:37:31.781089 kernel: IPI shorthand broadcast: enabled Oct 2 19:37:31.781096 kernel: sched_clock: Marking stable (356308335, 89226305)->(464394621, -18859981) Oct 2 19:37:31.781103 kernel: registered taskstats version 1 Oct 2 19:37:31.781112 kernel: Loading compiled-in X.509 certificates Oct 2 19:37:31.781119 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:37:31.781126 kernel: Key type .fscrypt registered Oct 2 19:37:31.781133 kernel: Key type fscrypt-provisioning registered Oct 2 19:37:31.781142 kernel: pstore: Using crash dump compression: deflate Oct 2 19:37:31.781149 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 2 19:37:31.781156 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:37:31.781175 kernel: ima: No architecture policies found Oct 2 19:37:31.781182 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:37:31.781190 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:37:31.781198 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:37:31.781205 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:37:31.781214 kernel: Run /init as init process Oct 2 19:37:31.781225 kernel: with arguments: Oct 2 19:37:31.781235 kernel: /init Oct 2 19:37:31.781245 kernel: with environment: Oct 2 19:37:31.781255 kernel: HOME=/ Oct 2 19:37:31.781264 kernel: TERM=linux Oct 2 19:37:31.781274 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:37:31.781288 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:37:31.781302 systemd[1]: Detected virtualization kvm. Oct 2 19:37:31.781313 systemd[1]: Detected architecture x86-64. Oct 2 19:37:31.781323 systemd[1]: Running in initrd. Oct 2 19:37:31.781334 systemd[1]: No hostname configured, using default hostname. Oct 2 19:37:31.781344 systemd[1]: Hostname set to . Oct 2 19:37:31.781357 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:37:31.781368 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:37:31.781378 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:37:31.781389 systemd[1]: Reached target cryptsetup.target. Oct 2 19:37:31.781399 systemd[1]: Reached target paths.target. Oct 2 19:37:31.781410 systemd[1]: Reached target slices.target. Oct 2 19:37:31.781420 systemd[1]: Reached target swap.target. Oct 2 19:37:31.781431 systemd[1]: Reached target timers.target. Oct 2 19:37:31.781444 systemd[1]: Listening on iscsid.socket. Oct 2 19:37:31.781454 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:37:31.781465 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:37:31.781476 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:37:31.781487 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:37:31.781497 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:37:31.781508 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:37:31.781518 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:37:31.781529 systemd[1]: Reached target sockets.target. Oct 2 19:37:31.781542 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:37:31.781552 systemd[1]: Finished network-cleanup.service. Oct 2 19:37:31.781563 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:37:31.781574 systemd[1]: Starting systemd-journald.service... Oct 2 19:37:31.781585 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:37:31.781596 systemd[1]: Starting systemd-resolved.service... Oct 2 19:37:31.781606 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:37:31.781617 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:37:31.781627 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:37:31.781640 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:37:31.781651 systemd[1]: Starting dracut-cmdline-ask.service... 
Oct 2 19:37:31.781662 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:37:31.781681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:37:31.781693 kernel: audit: type=1130 audit(1696275451.778:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.781707 systemd-journald[198]: Journal started Oct 2 19:37:31.781762 systemd-journald[198]: Runtime Journal (/run/log/journal/141a5a0ab5184baab53110b281bdaca7) is 6.0M, max 48.4M, 42.4M free. Oct 2 19:37:31.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.773288 systemd-modules-load[199]: Inserted module 'overlay' Oct 2 19:37:31.782838 systemd[1]: Started systemd-journald.service. Oct 2 19:37:31.782856 kernel: audit: type=1130 audit(1696275451.782:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.794545 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:37:31.797760 kernel: audit: type=1130 audit(1696275451.794:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.797347 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:37:31.800182 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:37:31.802347 systemd-modules-load[199]: Inserted module 'br_netfilter' Oct 2 19:37:31.803199 kernel: Bridge firewalling registered Oct 2 19:37:31.804869 dracut-cmdline[217]: dracut-dracut-053 Oct 2 19:37:31.806867 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:37:31.812245 systemd-resolved[200]: Positive Trust Anchors: Oct 2 19:37:31.813058 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:37:31.814334 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:37:31.820189 kernel: SCSI subsystem initialized Oct 2 19:37:31.821797 systemd-resolved[200]: Defaulting to hostname 'linux'. Oct 2 19:37:31.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.823390 systemd[1]: Started systemd-resolved.service. Oct 2 19:37:31.827274 kernel: audit: type=1130 audit(1696275451.823:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.824173 systemd[1]: Reached target nss-lookup.target. Oct 2 19:37:31.834071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:37:31.834103 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:37:31.834120 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:37:31.837138 systemd-modules-load[199]: Inserted module 'dm_multipath' Oct 2 19:37:31.838321 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:37:31.841218 kernel: audit: type=1130 audit(1696275451.838:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.839131 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:37:31.845283 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:37:31.848263 kernel: audit: type=1130 audit(1696275451.845:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.857190 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:37:31.867202 kernel: iscsi: registered transport (tcp) Oct 2 19:37:31.885180 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:37:31.885200 kernel: QLogic iSCSI HBA Driver Oct 2 19:37:31.902998 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:37:31.906439 kernel: audit: type=1130 audit(1696275451.903:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:31.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:31.904443 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:37:31.946185 kernel: raid6: avx2x4 gen() 30870 MB/s Oct 2 19:37:31.963179 kernel: raid6: avx2x4 xor() 8320 MB/s Oct 2 19:37:31.980192 kernel: raid6: avx2x2 gen() 32579 MB/s Oct 2 19:37:31.997188 kernel: raid6: avx2x2 xor() 19218 MB/s Oct 2 19:37:32.014187 kernel: raid6: avx2x1 gen() 26560 MB/s Oct 2 19:37:32.031189 kernel: raid6: avx2x1 xor() 15351 MB/s Oct 2 19:37:32.048192 kernel: raid6: sse2x4 gen() 14730 MB/s Oct 2 19:37:32.065192 kernel: raid6: sse2x4 xor() 7024 MB/s Oct 2 19:37:32.082190 kernel: raid6: sse2x2 gen() 16054 MB/s Oct 2 19:37:32.099190 kernel: raid6: sse2x2 xor() 9536 MB/s Oct 2 19:37:32.116188 kernel: raid6: sse2x1 gen() 11679 MB/s Oct 2 19:37:32.133613 kernel: raid6: sse2x1 xor() 7588 MB/s Oct 2 19:37:32.133692 kernel: raid6: using algorithm avx2x2 gen() 32579 MB/s Oct 2 19:37:32.133703 kernel: raid6: .... xor() 19218 MB/s, rmw enabled Oct 2 19:37:32.133712 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:37:32.145186 kernel: xor: automatically using best checksumming function avx Oct 2 19:37:32.236190 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:37:32.243042 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:37:32.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:32.245000 audit: BPF prog-id=7 op=LOAD Oct 2 19:37:32.246653 kernel: audit: type=1130 audit(1696275452.243:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:32.246683 kernel: audit: type=1334 audit(1696275452.245:10): prog-id=7 op=LOAD Oct 2 19:37:32.246000 audit: BPF prog-id=8 op=LOAD Oct 2 19:37:32.247004 systemd[1]: Starting systemd-udevd.service... Oct 2 19:37:32.260950 systemd-udevd[401]: Using default interface naming scheme 'v252'. Oct 2 19:37:32.265685 systemd[1]: Started systemd-udevd.service. Oct 2 19:37:32.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:32.266592 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:37:32.274471 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Oct 2 19:37:32.295232 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:37:32.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:32.296098 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:37:32.326048 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:37:32.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:32.355186 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:37:32.357191 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:37:32.360197 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:37:32.373182 kernel: libata version 3.00 loaded. Oct 2 19:37:32.377182 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:37:32.379181 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:37:32.380178 kernel: AES CTR mode by8 optimization enabled Oct 2 19:37:32.384201 kernel: scsi host0: ata_piix Oct 2 19:37:32.386345 kernel: scsi host1: ata_piix Oct 2 19:37:32.386452 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:37:32.386462 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:37:32.399113 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:37:32.403182 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) Oct 2 19:37:32.403376 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:37:32.404101 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:37:32.413085 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:37:32.416543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:37:32.417763 systemd[1]: Starting disk-uuid.service... Oct 2 19:37:32.423201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:37:32.427191 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:37:32.542841 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:37:32.542899 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:37:32.570200 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:37:32.570400 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:37:32.587188 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:37:33.476292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:37:33.476345 disk-uuid[516]: The operation has completed successfully. Oct 2 19:37:33.497508 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:37:33.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.497578 systemd[1]: Finished disk-uuid.service. Oct 2 19:37:33.505275 systemd[1]: Starting verity-setup.service... Oct 2 19:37:33.516198 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:37:33.541876 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:37:33.543871 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:37:33.545319 systemd[1]: Finished verity-setup.service. Oct 2 19:37:33.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.610192 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:37:33.610582 systemd[1]: Mounted sysusr-usr.mount. 
Oct 2 19:37:33.611586 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:37:33.612923 systemd[1]: Starting ignition-setup.service... Oct 2 19:37:33.614248 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:37:33.620451 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:37:33.620483 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:37:33.620497 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:37:33.626753 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:37:33.659904 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:37:33.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.661000 audit: BPF prog-id=9 op=LOAD Oct 2 19:37:33.661815 systemd[1]: Starting systemd-networkd.service... Oct 2 19:37:33.671855 systemd[1]: Finished ignition-setup.service. Oct 2 19:37:33.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.672831 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:37:33.679724 systemd-networkd[688]: lo: Link UP Oct 2 19:37:33.679730 systemd-networkd[688]: lo: Gained carrier Oct 2 19:37:33.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.680075 systemd-networkd[688]: Enumeration completed Oct 2 19:37:33.680129 systemd[1]: Started systemd-networkd.service. Oct 2 19:37:33.680531 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:37:33.681131 systemd[1]: Reached target network.target. Oct 2 19:37:33.681528 systemd-networkd[688]: eth0: Link UP Oct 2 19:37:33.681531 systemd-networkd[688]: eth0: Gained carrier Oct 2 19:37:33.686822 systemd[1]: Starting iscsiuio.service... Oct 2 19:37:33.690237 systemd[1]: Started iscsiuio.service. Oct 2 19:37:33.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.692339 systemd[1]: Starting iscsid.service... Oct 2 19:37:33.694515 systemd-networkd[688]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:37:33.695460 iscsid[695]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:37:33.695460 iscsid[695]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:37:33.695460 iscsid[695]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:37:33.695460 iscsid[695]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 19:37:33.695460 iscsid[695]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:37:33.695460 iscsid[695]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:37:33.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.700232 systemd[1]: Started iscsid.service. Oct 2 19:37:33.701062 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:37:33.710521 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:37:33.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.711183 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:37:33.712087 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:37:33.712658 systemd[1]: Reached target remote-fs.target. Oct 2 19:37:33.713732 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:37:33.720592 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:37:33.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.769227 ignition[690]: Ignition 2.14.0 Oct 2 19:37:33.769235 ignition[690]: Stage: fetch-offline Oct 2 19:37:33.769278 ignition[690]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:37:33.769285 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:37:33.769384 ignition[690]: parsed url from cmdline: "" Oct 2 19:37:33.769388 ignition[690]: no config URL provided Oct 2 19:37:33.769394 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:37:33.769402 ignition[690]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:37:33.769418 ignition[690]: op(1): [started] loading QEMU firmware config module Oct 2 19:37:33.769422 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:37:33.778644 ignition[690]: op(1): [finished] loading QEMU firmware config module Oct 2 19:37:33.788604 ignition[690]: parsing config with SHA512: 6d6188f376f01c6a459e8deb5653df92706d6ce1eda636366c10b6d9dd4180c57828d116a93d826694858f66719e6688400088931988f760af6bfac1e353f923 Oct 2 19:37:33.804235 unknown[690]: fetched base config from "system" Oct 2 19:37:33.804743 unknown[690]: fetched user config from "qemu" Oct 2 19:37:33.805101 ignition[690]: fetch-offline: fetch-offline passed Oct 2 19:37:33.805157 ignition[690]: Ignition finished successfully Oct 2 19:37:33.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.805948 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:37:33.806767 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:37:33.807374 systemd[1]: Starting ignition-kargs.service... 
Oct 2 19:37:33.815557 ignition[716]: Ignition 2.14.0 Oct 2 19:37:33.815565 ignition[716]: Stage: kargs Oct 2 19:37:33.815645 ignition[716]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:37:33.815652 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:37:33.817405 systemd[1]: Finished ignition-kargs.service. Oct 2 19:37:33.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.816381 ignition[716]: kargs: kargs passed Oct 2 19:37:33.816425 ignition[716]: Ignition finished successfully Oct 2 19:37:33.819099 systemd[1]: Starting ignition-disks.service... Oct 2 19:37:33.824357 ignition[723]: Ignition 2.14.0 Oct 2 19:37:33.824364 ignition[723]: Stage: disks Oct 2 19:37:33.824434 ignition[723]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:37:33.824441 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:37:33.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.825706 systemd[1]: Finished ignition-disks.service. Oct 2 19:37:33.825222 ignition[723]: disks: disks passed Oct 2 19:37:33.826478 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:37:33.825254 ignition[723]: Ignition finished successfully Oct 2 19:37:33.827664 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:37:33.828196 systemd[1]: Reached target local-fs.target. Oct 2 19:37:33.828694 systemd[1]: Reached target sysinit.target. Oct 2 19:37:33.829593 systemd[1]: Reached target basic.target. Oct 2 19:37:33.830679 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:37:33.838269 systemd-fsck[731]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:37:33.842660 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:37:33.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.844413 systemd[1]: Mounting sysroot.mount... Oct 2 19:37:33.849877 systemd[1]: Mounted sysroot.mount. Oct 2 19:37:33.851218 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:37:33.850362 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:37:33.851785 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:37:33.852465 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:37:33.852492 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:37:33.852509 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:37:33.853718 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:37:33.854847 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:37:33.857735 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:37:33.860269 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:37:33.862409 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:37:33.864251 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:37:33.885956 systemd[1]: Finished initrd-setup-root.service. 
Oct 2 19:37:33.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.887116 systemd[1]: Starting ignition-mount.service... Oct 2 19:37:33.888336 systemd[1]: Starting sysroot-boot.service... Oct 2 19:37:33.890720 bash[782]: umount: /sysroot/usr/share/oem: not mounted. Oct 2 19:37:33.897906 ignition[783]: INFO : Ignition 2.14.0 Oct 2 19:37:33.897906 ignition[783]: INFO : Stage: mount Oct 2 19:37:33.899309 ignition[783]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:37:33.899309 ignition[783]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:37:33.899309 ignition[783]: INFO : mount: mount passed Oct 2 19:37:33.899309 ignition[783]: INFO : Ignition finished successfully Oct 2 19:37:33.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:33.899679 systemd[1]: Finished ignition-mount.service. Oct 2 19:37:33.905298 systemd[1]: Finished sysroot-boot.service. Oct 2 19:37:33.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:34.554431 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:37:34.559668 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (793) Oct 2 19:37:34.559693 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:37:34.559703 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:37:34.560657 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:37:34.563339 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:37:34.564428 systemd[1]: Starting ignition-files.service... 
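The unit names in these mount records are simply the mount points with systemd's path escaping applied: sysroot-usr-share-oem.mount is the OEM partition (/dev/vda6) mounted at /sysroot/usr/share/oem ahead of the files stage. A quick way to see the mapping on any system with systemd-escape available:

    systemd-escape -p --suffix=mount /sysroot/usr/share/oem
    # prints: sysroot-usr-share-oem.mount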
Oct 2 19:37:34.576726 ignition[813]: INFO : Ignition 2.14.0 Oct 2 19:37:34.576726 ignition[813]: INFO : Stage: files Oct 2 19:37:34.578000 ignition[813]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:37:34.578000 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:37:34.579818 ignition[813]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:37:34.579818 ignition[813]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:37:34.579818 ignition[813]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:37:34.582737 ignition[813]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:37:34.582737 ignition[813]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:37:34.582737 ignition[813]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:37:34.582737 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:37:34.582737 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 19:37:34.582184 unknown[813]: wrote ssh authorized keys file for user: core Oct 2 19:37:34.768268 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:37:34.875092 ignition[813]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 19:37:34.877000 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:37:34.877000 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Oct 2 19:37:34.877000 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:37:35.132341 systemd-networkd[688]: eth0: Gained IPv6LL Oct 2 19:37:35.190452 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:37:35.275245 ignition[813]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Oct 2 19:37:35.277263 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Oct 2 19:37:35.277263 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:37:35.277263 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:37:35.371301 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:37:36.007792 ignition[813]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Oct 2 19:37:36.010031 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:37:36.010031 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:37:36.010031 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:37:36.081965 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:37:37.709031 ignition[813]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Oct 2 19:37:37.711609 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:37:37.711609 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:37:37.711609 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:37:37.711609 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:37:37.711609 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 19:37:37.711609 ignition[813]: INFO : files: op(f): [started] setting preset to disabled 
for "coreos-metadata.service" Oct 2 19:37:37.732386 ignition[813]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:37:37.768580 ignition[813]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:37:37.769682 ignition[813]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:37:37.769682 ignition[813]: INFO : files: op(11): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:37:37.769682 ignition[813]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:37:37.769682 ignition[813]: INFO : files: op(12): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:37:37.769682 ignition[813]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:37:37.769682 ignition[813]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:37:37.769682 ignition[813]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:37:37.769682 ignition[813]: INFO : files: files passed Oct 2 19:37:37.769682 ignition[813]: INFO : Ignition finished successfully Oct 2 19:37:37.777829 systemd[1]: Finished ignition-files.service. Oct 2 19:37:37.781839 kernel: kauditd_printk_skb: 22 callbacks suppressed Oct 2 19:37:37.781859 kernel: audit: type=1130 audit(1696275457.778:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.781837 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:37:37.782031 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:37:37.782544 systemd[1]: Starting ignition-quench.service... Oct 2 19:37:37.785477 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:37:37.785570 systemd[1]: Finished ignition-quench.service. Oct 2 19:37:37.790864 kernel: audit: type=1130 audit(1696275457.786:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.790886 kernel: audit: type=1131 audit(1696275457.786:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:37.794559 initrd-setup-root-after-ignition[839]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:37:37.796943 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:37:37.798430 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:37:37.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.798693 systemd[1]: Reached target ignition-complete.target. Oct 2 19:37:37.802967 kernel: audit: type=1130 audit(1696275457.798:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.803046 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:37:37.814089 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:37:37.814209 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:37:37.818569 kernel: audit: type=1130 audit(1696275457.814:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.818592 kernel: audit: type=1131 audit(1696275457.814:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.814693 systemd[1]: Reached target initrd-fs.target. Oct 2 19:37:37.819782 systemd[1]: Reached target initrd.target. Oct 2 19:37:37.820727 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:37:37.822014 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:37:37.831888 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:37:37.834731 kernel: audit: type=1130 audit(1696275457.831:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.834740 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:37:37.843514 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:37:37.843831 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:37:37.844777 systemd[1]: Stopped target timers.target. Oct 2 19:37:37.845741 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:37:37.849292 kernel: audit: type=1131 audit(1696275457.846:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:37.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.845820 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:37:37.846801 systemd[1]: Stopped target initrd.target. Oct 2 19:37:37.849607 systemd[1]: Stopped target basic.target. Oct 2 19:37:37.849790 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:37:37.849989 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:37:37.850207 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:37:37.853201 systemd[1]: Stopped target remote-fs.target. Oct 2 19:37:37.854258 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:37:37.855252 systemd[1]: Stopped target sysinit.target. Oct 2 19:37:37.855596 systemd[1]: Stopped target local-fs.target. Oct 2 19:37:37.855794 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:37:37.855995 systemd[1]: Stopped target swap.target. Oct 2 19:37:37.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.856188 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:37:37.862426 kernel: audit: type=1131 audit(1696275457.856:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.856265 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:37:37.865215 kernel: audit: type=1131 audit(1696275457.862:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.856567 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:37:37.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.858644 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:37:37.858721 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:37:37.862798 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:37:37.862874 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:37:37.865576 systemd[1]: Stopped target paths.target. Oct 2 19:37:37.866587 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:37:37.870232 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:37:37.870633 systemd[1]: Stopped target slices.target. Oct 2 19:37:37.870813 systemd[1]: Stopped target sockets.target. Oct 2 19:37:37.871025 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:37:37.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.871104 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
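Looking back at the files stage a few records above: for every remote artifact (cni-plugins, crictl, kubeadm, kubelet) Ignition logs a GET followed by a "file matches expected sum of" line, meaning it refuses to write the file into /sysroot unless the SHA-512 digest of the downloaded bytes matches the verification hash supplied in the config. A minimal Python sketch of that verify-then-write idea (the URL and digest are copied from the op(6) kubelet records in this log; the helper itself is illustrative, not Ignition's actual code):

    import hashlib
    import urllib.request

    def fetch_verified(url: str, expected_sha512: str, dest: str) -> None:
        # Download url and only write dest if the SHA-512 digest matches.
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
        digest = hashlib.sha512(data).hexdigest()
        if digest != expected_sha512:
            raise ValueError("checksum mismatch: got " + digest)
        with open(dest, "wb") as f:
            f.write(data)

    # Values taken from the kubelet entry above.
    fetch_verified(
        "https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/amd64/kubelet",
        "40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b",
        "/opt/bin/kubelet",
    )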
Oct 2 19:37:37.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.873160 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:37:37.873262 systemd[1]: Stopped ignition-files.service. Oct 2 19:37:37.880328 iscsid[695]: iscsid shutting down. Oct 2 19:37:37.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.875105 systemd[1]: Stopping ignition-mount.service... Oct 2 19:37:37.875584 systemd[1]: Stopping iscsid.service... Oct 2 19:37:37.876965 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:37:37.880718 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:37:37.880871 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:37:37.881269 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:37:37.885861 ignition[854]: INFO : Ignition 2.14.0 Oct 2 19:37:37.885861 ignition[854]: INFO : Stage: umount Oct 2 19:37:37.885861 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:37:37.885861 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:37:37.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.881352 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:37:37.890493 ignition[854]: INFO : umount: umount passed Oct 2 19:37:37.890493 ignition[854]: INFO : Ignition finished successfully Oct 2 19:37:37.887080 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:37:37.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.887160 systemd[1]: Stopped iscsid.service. Oct 2 19:37:37.888356 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 2 19:37:37.888421 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:37:37.889586 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:37:37.889649 systemd[1]: Stopped ignition-mount.service. Oct 2 19:37:37.890583 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:37:37.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.890612 systemd[1]: Closed iscsid.socket. Oct 2 19:37:37.891464 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:37:37.891493 systemd[1]: Stopped ignition-disks.service. Oct 2 19:37:37.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.892600 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:37:37.892635 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:37:37.893703 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:37:37.893732 systemd[1]: Stopped ignition-setup.service. Oct 2 19:37:37.895220 systemd[1]: Stopping iscsiuio.service... Oct 2 19:37:37.896794 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:37:37.897295 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:37:37.897359 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:37:37.898375 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:37:37.898436 systemd[1]: Stopped iscsiuio.service. Oct 2 19:37:37.899263 systemd[1]: Stopped target network.target. Oct 2 19:37:37.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.900130 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:37:37.900154 systemd[1]: Closed iscsiuio.socket. Oct 2 19:37:37.901175 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:37:37.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.901206 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:37:37.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.902316 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:37:37.903560 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:37:37.907201 systemd-networkd[688]: eth0: DHCPv6 lease lost Oct 2 19:37:37.908459 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:37:37.908534 systemd[1]: Stopped systemd-networkd.service. 
Oct 2 19:37:37.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.917000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:37:37.910182 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:37:37.910208 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:37:37.911739 systemd[1]: Stopping network-cleanup.service... Oct 2 19:37:37.912235 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:37:37.923000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:37:37.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.912271 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:37:37.913608 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:37:37.913642 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:37:37.914294 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:37:37.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.914324 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:37:37.915451 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:37:37.917035 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:37:37.917372 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:37:37.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.917443 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:37:37.922317 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:37:37.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.922384 systemd[1]: Stopped network-cleanup.service. Oct 2 19:37:37.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.924728 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:37:37.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.924827 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:37:37.926406 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:37:37.926435 systemd[1]: Closed systemd-udevd-control.socket. 
Oct 2 19:37:37.927418 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:37:37.927442 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:37:37.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:37.928418 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:37:37.928450 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:37:37.929715 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:37:37.929742 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:37:37.930822 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:37:37.930848 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:37:37.932404 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:37:37.932967 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:37:37.933005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:37:37.933645 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:37:37.933677 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:37:37.934589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:37:37.934619 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:37:37.936024 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:37:37.937975 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:37:37.938035 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:37:37.938723 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:37:37.940207 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:37:37.956157 systemd[1]: Switching root. Oct 2 19:37:37.974400 systemd-journald[198]: Journal stopped Oct 2 19:37:43.114294 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Oct 2 19:37:43.114365 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:37:43.114397 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:37:43.114412 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:37:43.114426 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:37:43.114439 kernel: SELinux: policy capability open_perms=1 Oct 2 19:37:43.114453 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:37:43.114466 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:37:43.114484 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:37:43.114497 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:37:43.114510 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:37:43.114523 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:37:43.114537 systemd[1]: Successfully loaded SELinux policy in 44.658ms. Oct 2 19:37:43.114566 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.053ms. 
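The kernel lines above enumerate the capabilities of the SELinux policy that was just loaded (network_peer_controls, open_perms, and so on), along with classes the policy does not define. Once the system is up, the same flags are visible under selinuxfs; a small Python sketch that lists them, assuming selinuxfs is mounted at its usual location:

    from pathlib import Path

    caps = Path("/sys/fs/selinux/policy_capabilities")
    for entry in sorted(caps.iterdir()):
        # each file contains "0" (disabled) or "1" (enabled)
        print(entry.name, "=", entry.read_text().strip())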
Oct 2 19:37:43.114586 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:37:43.114601 systemd[1]: Detected virtualization kvm. Oct 2 19:37:43.114615 systemd[1]: Detected architecture x86-64. Oct 2 19:37:43.114629 systemd[1]: Detected first boot. Oct 2 19:37:43.114643 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:37:43.114657 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:37:43.114672 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:37:43.114690 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:37:43.114706 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:37:43.114722 kernel: kauditd_printk_skb: 39 callbacks suppressed Oct 2 19:37:43.114736 kernel: audit: type=1334 audit(1696275462.995:82): prog-id=12 op=LOAD Oct 2 19:37:43.114749 kernel: audit: type=1334 audit(1696275462.995:83): prog-id=3 op=UNLOAD Oct 2 19:37:43.114762 kernel: audit: type=1334 audit(1696275462.997:84): prog-id=13 op=LOAD Oct 2 19:37:43.114776 kernel: audit: type=1334 audit(1696275462.998:85): prog-id=14 op=LOAD Oct 2 19:37:43.114789 kernel: audit: type=1334 audit(1696275462.998:86): prog-id=4 op=UNLOAD Oct 2 19:37:43.114805 kernel: audit: type=1334 audit(1696275462.999:87): prog-id=5 op=UNLOAD Oct 2 19:37:43.114818 kernel: audit: type=1334 audit(1696275463.001:88): prog-id=15 op=LOAD Oct 2 19:37:43.114831 kernel: audit: type=1334 audit(1696275463.001:89): prog-id=12 op=UNLOAD Oct 2 19:37:43.114844 kernel: audit: type=1334 audit(1696275463.003:90): prog-id=16 op=LOAD Oct 2 19:37:43.114861 kernel: audit: type=1334 audit(1696275463.004:91): prog-id=17 op=LOAD Oct 2 19:37:43.114875 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:37:43.114889 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:37:43.114903 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:37:43.114918 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:37:43.114952 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:37:43.114970 systemd[1]: Created slice system-getty.slice. Oct 2 19:37:43.114985 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:37:43.114999 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:37:43.115013 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:37:43.115026 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:37:43.115040 systemd[1]: Created slice user.slice. Oct 2 19:37:43.115056 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:37:43.115068 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:37:43.115081 systemd[1]: Set up automount boot.automount. Oct 2 19:37:43.115093 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:37:43.115106 systemd[1]: Stopped target initrd-switch-root.target. 
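The two locksmithd.service messages above are systemd 252 warning that the unit still uses legacy cgroup-v1 directives; they keep working for now, but CPUShares= maps onto CPUWeight= and MemoryLimit= onto MemoryMax=. The real fix is for the shipped unit to adopt the newer names; purely as a sketch of that mapping, the modern equivalents would live in an /etc drop-in like the following (file name and concrete values are illustrative, not taken from this image):

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    CPUWeight=100
    MemoryMax=512M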
Oct 2 19:37:43.115120 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:37:43.115132 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:37:43.115145 systemd[1]: Reached target integritysetup.target. Oct 2 19:37:43.115159 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:37:43.115207 systemd[1]: Reached target remote-fs.target. Oct 2 19:37:43.115224 systemd[1]: Reached target slices.target. Oct 2 19:37:43.115239 systemd[1]: Reached target swap.target. Oct 2 19:37:43.115253 systemd[1]: Reached target torcx.target. Oct 2 19:37:43.115272 systemd[1]: Reached target veritysetup.target. Oct 2 19:37:43.115286 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:37:43.115301 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:37:43.115316 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:37:43.115330 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:37:43.115344 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:37:43.115360 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:37:43.115376 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:37:43.115397 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:37:43.115412 systemd[1]: Mounting media.mount... Oct 2 19:37:43.115426 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:37:43.115443 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:37:43.115459 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:37:43.115474 systemd[1]: Mounting tmp.mount... Oct 2 19:37:43.115488 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:37:43.115505 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:37:43.115519 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:37:43.115534 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:37:43.115549 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:37:43.115563 systemd[1]: Starting modprobe@drm.service... Oct 2 19:37:43.115578 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:37:43.115592 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:37:43.115607 systemd[1]: Starting modprobe@loop.service... Oct 2 19:37:43.115621 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:37:43.115637 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:37:43.115652 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:37:43.115666 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:37:43.115680 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:37:43.115693 systemd[1]: Stopped systemd-journald.service. Oct 2 19:37:43.115708 systemd[1]: Starting systemd-journald.service... Oct 2 19:37:43.115722 kernel: loop: module loaded Oct 2 19:37:43.115736 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:37:43.115750 kernel: fuse: init (API version 7.34) Oct 2 19:37:43.115766 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:37:43.115780 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:37:43.115795 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:37:43.115809 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:37:43.115823 systemd[1]: Stopped verity-setup.service. Oct 2 19:37:43.115839 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
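The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop units being started here are all instances of systemd's modprobe@.service template: the instance name after the @ is the kernel module to load, which is why the loop and fuse kernel lines appear moments later. Roughly what such a template looks like (paraphrased from systemd's stock unit, not copied from this image):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %i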
Oct 2 19:37:43.115853 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:37:43.115868 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:37:43.115882 systemd[1]: Mounted media.mount. Oct 2 19:37:43.115899 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:37:43.115914 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:37:43.115928 systemd[1]: Mounted tmp.mount. Oct 2 19:37:43.115943 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:37:43.115959 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:37:43.115978 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:37:43.115995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:37:43.116009 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:37:43.116026 systemd-journald[957]: Journal started Oct 2 19:37:43.116082 systemd-journald[957]: Runtime Journal (/run/log/journal/141a5a0ab5184baab53110b281bdaca7) is 6.0M, max 48.4M, 42.4M free. Oct 2 19:37:38.052000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:37:40.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:37:40.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:37:40.839000 audit: BPF prog-id=10 op=LOAD Oct 2 19:37:40.839000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:37:40.839000 audit: BPF prog-id=11 op=LOAD Oct 2 19:37:40.839000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:37:42.995000 audit: BPF prog-id=12 op=LOAD Oct 2 19:37:42.995000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:37:42.997000 audit: BPF prog-id=13 op=LOAD Oct 2 19:37:42.998000 audit: BPF prog-id=14 op=LOAD Oct 2 19:37:42.998000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:37:42.999000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:37:43.001000 audit: BPF prog-id=15 op=LOAD Oct 2 19:37:43.001000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:37:43.003000 audit: BPF prog-id=16 op=LOAD Oct 2 19:37:43.004000 audit: BPF prog-id=17 op=LOAD Oct 2 19:37:43.004000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:37:43.004000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:37:43.005000 audit: BPF prog-id=18 op=LOAD Oct 2 19:37:43.005000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:37:43.005000 audit: BPF prog-id=19 op=LOAD Oct 2 19:37:43.005000 audit: BPF prog-id=20 op=LOAD Oct 2 19:37:43.005000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:37:43.005000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:37:43.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:43.018000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:37:43.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.085000 audit: BPF prog-id=21 op=LOAD Oct 2 19:37:43.085000 audit: BPF prog-id=22 op=LOAD Oct 2 19:37:43.085000 audit: BPF prog-id=23 op=LOAD Oct 2 19:37:43.085000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:37:43.085000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:37:43.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.112000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:37:43.112000 audit[957]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd5851fdf0 a2=4000 a3=7ffd5851fe8c items=0 ppid=1 pid=957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:43.112000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:37:43.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:42.994552 systemd[1]: Queued start job for default target multi-user.target. 
Oct 2 19:37:40.892523 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:37:42.994563 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:37:40.892757 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:37:43.005614 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:37:40.892773 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:37:40.892799 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:37:43.118174 systemd[1]: Started systemd-journald.service. Oct 2 19:37:43.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:40.892808 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:37:40.892834 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:37:40.892845 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:37:43.118538 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 2 19:37:40.893039 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:37:40.893077 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:37:40.893088 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:37:40.893392 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:37:40.893422 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:37:40.893445 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:37:40.893458 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:37:40.893471 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:37:40.893482 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:37:42.729949 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:42Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:37:42.730215 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:42Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:37:42.730305 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:42Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:37:42.730457 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:42Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:37:42.730501 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:42Z" level=debug msg="profile applied" sealed 
profile=/run/torcx/profile.json upper profile= Oct 2 19:37:42.730552 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2023-10-02T19:37:42Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:37:43.120346 systemd[1]: Finished modprobe@drm.service. Oct 2 19:37:43.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.121519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:37:43.121778 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:37:43.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.122685 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:37:43.122890 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:37:43.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.123773 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:37:43.123992 systemd[1]: Finished modprobe@loop.service. Oct 2 19:37:43.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.125079 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:37:43.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.126039 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:37:43.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.127116 systemd[1]: Finished systemd-remount-fs.service. 
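The torcx-generator messages interleaved above (their timestamps predate the journal flush) show how the Docker runtime gets onto this otherwise image-based system: the vendor profile is resolved, the docker:com.coreos.cl archive is unpacked from /usr/share/torcx/store into /run/torcx, its binaries and systemd/networkd units are propagated, and the sealed state is written to /run/metadata/torcx. For orientation, a torcx profile manifest is a small JSON document along these lines (format recalled from torcx documentation; not read from this image):

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }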
Oct 2 19:37:43.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.128208 systemd[1]: Reached target network-pre.target.
Oct 2 19:37:43.130095 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 2 19:37:43.131896 systemd[1]: Mounting sys-kernel-config.mount...
Oct 2 19:37:43.132469 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 2 19:37:43.134303 systemd[1]: Starting systemd-hwdb-update.service...
Oct 2 19:37:43.135970 systemd[1]: Starting systemd-journal-flush.service...
Oct 2 19:37:43.143397 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 2 19:37:43.145019 systemd[1]: Starting systemd-random-seed.service...
Oct 2 19:37:43.145794 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 2 19:37:43.147120 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:37:43.149395 systemd-journald[957]: Time spent on flushing to /var/log/journal/141a5a0ab5184baab53110b281bdaca7 is 14.923ms for 1165 entries.
Oct 2 19:37:43.149395 systemd-journald[957]: System Journal (/var/log/journal/141a5a0ab5184baab53110b281bdaca7) is 8.0M, max 195.6M, 187.6M free.
Oct 2 19:37:43.177827 systemd-journald[957]: Received client request to flush runtime journal.
Oct 2 19:37:43.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.150495 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 2 19:37:43.152476 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:37:43.179216 udevadm[993]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 2 19:37:43.154083 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 2 19:37:43.155388 systemd[1]: Mounted sys-kernel-config.mount.
Oct 2 19:37:43.157285 systemd[1]: Starting systemd-sysusers.service...
Oct 2 19:37:43.158847 systemd[1]: Starting systemd-udev-settle.service...
Oct 2 19:37:43.159653 systemd[1]: Finished systemd-random-seed.service.
Oct 2 19:37:43.160353 systemd[1]: Reached target first-boot-complete.target.
Oct 2 19:37:43.169879 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:37:43.178812 systemd[1]: Finished systemd-journal-flush.service.
Oct 2 19:37:43.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.184528 systemd[1]: Finished systemd-sysusers.service.
Oct 2 19:37:43.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.186217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 2 19:37:43.210528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 2 19:37:43.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.609365 systemd[1]: Finished systemd-hwdb-update.service.
Oct 2 19:37:43.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.610000 audit: BPF prog-id=24 op=LOAD
Oct 2 19:37:43.610000 audit: BPF prog-id=25 op=LOAD
Oct 2 19:37:43.610000 audit: BPF prog-id=7 op=UNLOAD
Oct 2 19:37:43.610000 audit: BPF prog-id=8 op=UNLOAD
Oct 2 19:37:43.611604 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:37:43.630823 systemd-udevd[998]: Using default interface naming scheme 'v252'.
Oct 2 19:37:43.647422 systemd[1]: Started systemd-udevd.service.
Oct 2 19:37:43.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.648000 audit: BPF prog-id=26 op=LOAD
Oct 2 19:37:43.649933 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:37:43.658000 audit: BPF prog-id=27 op=LOAD
Oct 2 19:37:43.658000 audit: BPF prog-id=28 op=LOAD
Oct 2 19:37:43.658000 audit: BPF prog-id=29 op=LOAD
Oct 2 19:37:43.659567 systemd[1]: Starting systemd-userdbd.service...
Oct 2 19:37:43.676925 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Oct 2 19:37:43.692087 systemd[1]: Started systemd-userdbd.service.
Oct 2 19:37:43.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:43.701286 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:37:43.728211 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:37:43.731000 audit[1013]: AVC avc: denied { confidentiality } for pid=1013 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:37:43.731000 audit[1013]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560e2f6e61d0 a1=32194 a2=7f8a333e8bc5 a3=5 items=106 ppid=998 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:43.731000 audit: CWD cwd="/" Oct 2 19:37:43.731000 audit: PATH item=0 name=(null) inode=13282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=1 name=(null) inode=13890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=2 name=(null) inode=13282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=3 name=(null) inode=13891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=4 name=(null) inode=13282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=5 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=6 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=7 name=(null) inode=13893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=8 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=9 name=(null) inode=13894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=10 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=11 name=(null) inode=13895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=12 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 
19:37:43.731000 audit: PATH item=13 name=(null) inode=13896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=14 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=15 name=(null) inode=13897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=16 name=(null) inode=13282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=17 name=(null) inode=13898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=18 name=(null) inode=13898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=19 name=(null) inode=13899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=20 name=(null) inode=13898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=21 name=(null) inode=13900 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=22 name=(null) inode=13898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=23 name=(null) inode=13901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=24 name=(null) inode=13898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=25 name=(null) inode=13902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=26 name=(null) inode=13898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=27 name=(null) inode=13903 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=28 name=(null) inode=13282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=29 name=(null) inode=13904 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=30 name=(null) inode=13904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=31 name=(null) inode=13905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=32 name=(null) inode=13904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=33 name=(null) inode=13906 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=34 name=(null) inode=13904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=35 name=(null) inode=13907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=36 name=(null) inode=13904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=37 name=(null) inode=13908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=38 name=(null) inode=13904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=39 name=(null) inode=13909 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=40 name=(null) inode=13282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=41 name=(null) inode=13910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=42 name=(null) inode=13910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=43 name=(null) inode=13911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=44 name=(null) inode=13910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=45 name=(null) inode=13912 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=46 name=(null) inode=13910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=47 name=(null) inode=13913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=48 name=(null) inode=13910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=49 name=(null) inode=13914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=50 name=(null) inode=13910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=51 name=(null) inode=13915 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=52 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=53 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=54 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=55 name=(null) inode=13917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=56 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=57 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=58 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=59 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=60 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=61 name=(null) inode=13920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=62 name=(null) 
inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=63 name=(null) inode=13921 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=64 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=65 name=(null) inode=13922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=66 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=67 name=(null) inode=13923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=68 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=69 name=(null) inode=13924 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=70 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=71 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=72 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=73 name=(null) inode=13926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=74 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=75 name=(null) inode=13927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=76 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=77 name=(null) inode=13928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=78 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=79 name=(null) inode=13929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=80 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=81 name=(null) inode=13930 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=82 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=83 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=84 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=85 name=(null) inode=13932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=86 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=87 name=(null) inode=13933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=88 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=89 name=(null) inode=13934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=90 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=91 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=92 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=93 name=(null) inode=13936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=94 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=95 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=96 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=97 name=(null) inode=13938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=98 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=99 name=(null) inode=13939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=100 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=101 name=(null) inode=13940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=102 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=103 name=(null) inode=13941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=104 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PATH item=105 name=(null) inode=13942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:37:43.731000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:37:43.750004 systemd-networkd[1005]: lo: Link UP Oct 2 19:37:43.750012 systemd-networkd[1005]: lo: Gained carrier Oct 2 19:37:43.750405 systemd-networkd[1005]: Enumeration completed Oct 2 19:37:43.750497 systemd-networkd[1005]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:37:43.750502 systemd[1]: Started systemd-networkd.service. Oct 2 19:37:43.751998 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:37:43.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:43.751984 systemd-networkd[1005]: eth0: Link UP Oct 2 19:37:43.751989 systemd-networkd[1005]: eth0: Gained carrier Oct 2 19:37:43.762270 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:37:43.763339 systemd-networkd[1005]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:37:43.773391 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Oct 2 19:37:43.773606 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:37:43.825492 kernel: kvm: Nested Virtualization enabled Oct 2 19:37:43.825580 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:37:43.838187 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:37:43.856487 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:37:43.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.860187 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:37:43.873101 lvm[1034]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:37:43.900864 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:37:43.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.901646 systemd[1]: Reached target cryptsetup.target. Oct 2 19:37:43.903113 systemd[1]: Starting lvm2-activation.service... Oct 2 19:37:43.906020 lvm[1035]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:37:43.930925 systemd[1]: Finished lvm2-activation.service. Oct 2 19:37:43.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.931603 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:37:43.932185 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:37:43.932205 systemd[1]: Reached target local-fs.target. Oct 2 19:37:43.932752 systemd[1]: Reached target machines.target. Oct 2 19:37:43.934268 systemd[1]: Starting ldconfig.service... Oct 2 19:37:43.941544 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:37:43.941583 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:37:43.942317 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:37:43.943640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:37:43.945702 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:37:43.946412 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:37:43.946448 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:37:43.947137 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Oct 2 19:37:43.948029 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1037 (bootctl) Oct 2 19:37:43.949065 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:37:43.954751 systemd-tmpfiles[1040]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:37:43.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:43.955686 systemd-tmpfiles[1040]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:37:43.957589 systemd-tmpfiles[1040]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:37:43.958539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:37:43.987605 systemd-fsck[1045]: fsck.fat 4.2 (2021-01-31) Oct 2 19:37:43.987605 systemd-fsck[1045]: /dev/vda1: 790 files, 115092/258078 clusters Oct 2 19:37:43.989366 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:37:43.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.102815 systemd[1]: Mounting boot.mount... Oct 2 19:37:44.257562 systemd[1]: Mounted boot.mount. Oct 2 19:37:44.282532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:37:44.284086 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:37:44.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.285223 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:37:44.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.339454 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:37:44.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.341731 systemd[1]: Starting audit-rules.service... Oct 2 19:37:44.343566 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:37:44.345377 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:37:44.346000 audit: BPF prog-id=30 op=LOAD Oct 2 19:37:44.348000 audit: BPF prog-id=31 op=LOAD Oct 2 19:37:44.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:44.347768 systemd[1]: Starting systemd-resolved.service... Oct 2 19:37:44.349774 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:37:44.351042 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:37:44.352088 systemd[1]: Finished clean-ca-certificates.service. 
Oct 2 19:37:44.353113 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 2 19:37:44.359000 audit[1057]: SYSTEM_BOOT pid=1057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:44.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:44.360715 systemd[1]: Finished systemd-update-utmp.service.
Oct 2 19:37:44.372999 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct 2 19:37:44.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:37:44.383453 augenrules[1069]: No rules
Oct 2 19:37:44.383000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct 2 19:37:44.383000 audit[1069]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd8fadb970 a2=420 a3=0 items=0 ppid=1049 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:37:44.383000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct 2 19:37:44.384791 systemd[1]: Finished audit-rules.service.
Oct 2 19:37:44.405896 systemd-resolved[1053]: Positive Trust Anchors:
Oct 2 19:37:44.405909 systemd-resolved[1053]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:37:44.405940 systemd-resolved[1053]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:37:44.410957 systemd[1]: Started systemd-timesyncd.service.
Oct 2 19:37:44.411965 systemd[1]: Reached target time-set.target.
Oct 2 19:37:44.412580 systemd-timesyncd[1054]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 2 19:37:44.412859 systemd-timesyncd[1054]: Initial clock synchronization to Mon 2023-10-02 19:37:44.236947 UTC.
Oct 2 19:37:44.423291 systemd-resolved[1053]: Defaulting to hostname 'linux'.
Oct 2 19:37:44.424720 systemd[1]: Started systemd-resolved.service.
Oct 2 19:37:44.425326 systemd[1]: Reached target network.target.
Oct 2 19:37:44.425832 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:37:44.435879 ldconfig[1036]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 2 19:37:44.752138 systemd[1]: Finished ldconfig.service.
Oct 2 19:37:44.754929 systemd[1]: Starting systemd-update-done.service...
Oct 2 19:37:44.761398 systemd[1]: Finished systemd-update-done.service.
Oct 2 19:37:44.762102 systemd[1]: Reached target sysinit.target.
Oct 2 19:37:44.762887 systemd[1]: Started motdgen.path.
Oct 2 19:37:44.763392 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 2 19:37:44.764232 systemd[1]: Started logrotate.timer.
Oct 2 19:37:44.764810 systemd[1]: Started mdadm.timer.
Oct 2 19:37:44.765284 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 2 19:37:44.765897 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 2 19:37:44.765931 systemd[1]: Reached target paths.target.
Oct 2 19:37:44.766592 systemd[1]: Reached target timers.target.
Oct 2 19:37:44.767515 systemd[1]: Listening on dbus.socket.
Oct 2 19:37:44.770611 systemd[1]: Starting docker.socket...
Oct 2 19:37:44.773632 systemd[1]: Listening on sshd.socket.
Oct 2 19:37:44.774491 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:37:44.775073 systemd[1]: Listening on docker.socket.
Oct 2 19:37:44.775838 systemd[1]: Reached target sockets.target.
Oct 2 19:37:44.776593 systemd[1]: Reached target basic.target.
Oct 2 19:37:44.777270 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:37:44.777299 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:37:44.778698 systemd[1]: Starting containerd.service...
Oct 2 19:37:44.780536 systemd[1]: Starting dbus.service...
Oct 2 19:37:44.782074 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 2 19:37:44.783860 systemd[1]: Starting extend-filesystems.service...
Oct 2 19:37:44.784695 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 2 19:37:44.785891 systemd[1]: Starting motdgen.service...
Oct 2 19:37:44.787909 jq[1080]: false
Oct 2 19:37:44.788362 systemd[1]: Starting prepare-cni-plugins.service...
Oct 2 19:37:44.793051 systemd[1]: Starting prepare-critools.service...
Oct 2 19:37:44.795397 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 2 19:37:44.797065 extend-filesystems[1081]: Found sr0
Oct 2 19:37:44.797968 systemd[1]: Starting sshd-keygen.service...
Oct 2 19:37:44.798191 extend-filesystems[1081]: Found vda
Oct 2 19:37:44.799688 extend-filesystems[1081]: Found vda1
Oct 2 19:37:44.799688 extend-filesystems[1081]: Found vda2
Oct 2 19:37:44.799688 extend-filesystems[1081]: Found vda3
Oct 2 19:37:44.799688 extend-filesystems[1081]: Found usr
Oct 2 19:37:44.799688 extend-filesystems[1081]: Found vda4
Oct 2 19:37:44.799688 extend-filesystems[1081]: Found vda6
Oct 2 19:37:44.799688 extend-filesystems[1081]: Found vda7
Oct 2 19:37:44.799688 extend-filesystems[1081]: Found vda9
Oct 2 19:37:44.799688 extend-filesystems[1081]: Checking size of /dev/vda9
Oct 2 19:37:44.809230 dbus-daemon[1079]: [system] SELinux support is enabled
Oct 2 19:37:44.812369 systemd[1]: Starting systemd-logind.service...
Oct 2 19:37:44.813222 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:37:44.813306 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 2 19:37:44.814042 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:37:44.815086 systemd[1]: Starting update-engine.service... Oct 2 19:37:44.817090 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:37:44.818729 systemd[1]: Started dbus.service. Oct 2 19:37:44.821158 jq[1104]: true Oct 2 19:37:44.825787 extend-filesystems[1081]: Old size kept for /dev/vda9 Oct 2 19:37:44.823044 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:37:44.823309 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:37:44.823627 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:37:44.823791 systemd[1]: Finished motdgen.service. Oct 2 19:37:44.825250 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:37:44.825414 systemd[1]: Finished extend-filesystems.service. Oct 2 19:37:44.830438 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:37:44.830657 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:37:44.839972 tar[1106]: ./ Oct 2 19:37:44.839972 tar[1106]: ./macvlan Oct 2 19:37:44.840738 jq[1111]: true Oct 2 19:37:44.843255 tar[1107]: crictl Oct 2 19:37:44.842542 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:37:44.842571 systemd[1]: Reached target system-config.target. Oct 2 19:37:44.844306 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:37:44.844343 systemd[1]: Reached target user-config.target. Oct 2 19:37:44.880755 env[1112]: time="2023-10-02T19:37:44.880692905Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:37:44.901716 tar[1106]: ./static Oct 2 19:37:44.911909 update_engine[1103]: I1002 19:37:44.911337 1103 main.cc:92] Flatcar Update Engine starting Oct 2 19:37:44.912268 systemd-logind[1100]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:37:44.912302 systemd-logind[1100]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:37:44.913317 systemd-logind[1100]: New seat seat0. Oct 2 19:37:44.915599 systemd[1]: Started systemd-logind.service. Oct 2 19:37:44.916076 update_engine[1103]: I1002 19:37:44.916036 1103 update_check_scheduler.cc:74] Next update check in 7m57s Oct 2 19:37:44.916647 systemd[1]: Started update-engine.service. Oct 2 19:37:44.919488 systemd[1]: Started locksmithd.service. Oct 2 19:37:44.922639 env[1112]: time="2023-10-02T19:37:44.922508670Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:37:44.922774 env[1112]: time="2023-10-02T19:37:44.922748490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924030 env[1112]: time="2023-10-02T19:37:44.923964210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924030 env[1112]: time="2023-10-02T19:37:44.923993675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924346 env[1112]: time="2023-10-02T19:37:44.924278279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924346 env[1112]: time="2023-10-02T19:37:44.924302805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924346 env[1112]: time="2023-10-02T19:37:44.924315749Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:37:44.924346 env[1112]: time="2023-10-02T19:37:44.924327992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924451 env[1112]: time="2023-10-02T19:37:44.924406199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924632 env[1112]: time="2023-10-02T19:37:44.924603980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924739 env[1112]: time="2023-10-02T19:37:44.924712263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:37:44.924739 env[1112]: time="2023-10-02T19:37:44.924731249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:37:44.924798 env[1112]: time="2023-10-02T19:37:44.924777966Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:37:44.924798 env[1112]: time="2023-10-02T19:37:44.924789318Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:37:44.936581 tar[1106]: ./vlan Oct 2 19:37:44.969595 tar[1106]: ./portmap Oct 2 19:37:44.982950 bash[1132]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:37:44.983740 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:37:44.987046 env[1112]: time="2023-10-02T19:37:44.986996440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:37:44.987118 env[1112]: time="2023-10-02T19:37:44.987056724Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:37:44.987118 env[1112]: time="2023-10-02T19:37:44.987071862Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:37:44.987118 env[1112]: time="2023-10-02T19:37:44.987112839Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:37:44.987210 env[1112]: time="2023-10-02T19:37:44.987133147Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Oct 2 19:37:44.987210 env[1112]: time="2023-10-02T19:37:44.987155679Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:37:44.987210 env[1112]: time="2023-10-02T19:37:44.987191005Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:37:44.987293 env[1112]: time="2023-10-02T19:37:44.987213738Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:37:44.987964 env[1112]: time="2023-10-02T19:37:44.987231672Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:37:44.988027 env[1112]: time="2023-10-02T19:37:44.987975597Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:37:44.988027 env[1112]: time="2023-10-02T19:37:44.988014911Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:37:44.988083 env[1112]: time="2023-10-02T19:37:44.988032704Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:37:44.988242 env[1112]: time="2023-10-02T19:37:44.988203825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:37:44.988340 env[1112]: time="2023-10-02T19:37:44.988302961Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:37:44.988665 env[1112]: time="2023-10-02T19:37:44.988637348Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:37:44.988728 env[1112]: time="2023-10-02T19:37:44.988675209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.988728 env[1112]: time="2023-10-02T19:37:44.988694195Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:37:44.988782 env[1112]: time="2023-10-02T19:37:44.988751272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.988782 env[1112]: time="2023-10-02T19:37:44.988767392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.988839 env[1112]: time="2023-10-02T19:37:44.988781799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.988839 env[1112]: time="2023-10-02T19:37:44.988796296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.988839 env[1112]: time="2023-10-02T19:37:44.988810653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.988910 env[1112]: time="2023-10-02T19:37:44.988844036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.988910 env[1112]: time="2023-10-02T19:37:44.988859054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.988910 env[1112]: time="2023-10-02T19:37:44.988873121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 2 19:37:44.988910 env[1112]: time="2023-10-02T19:37:44.988888008Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:37:44.989067 env[1112]: time="2023-10-02T19:37:44.989038952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.989129 env[1112]: time="2023-10-02T19:37:44.989068667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.989129 env[1112]: time="2023-10-02T19:37:44.989083425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.989129 env[1112]: time="2023-10-02T19:37:44.989109394Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:37:44.989232 env[1112]: time="2023-10-02T19:37:44.989132667Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:37:44.989232 env[1112]: time="2023-10-02T19:37:44.989148998Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:37:44.989232 env[1112]: time="2023-10-02T19:37:44.989197689Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:37:44.989310 env[1112]: time="2023-10-02T19:37:44.989247262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:37:44.989589 env[1112]: time="2023-10-02T19:37:44.989511288Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:37:44.989589 env[1112]: time="2023-10-02T19:37:44.989591789Z" level=info msg="Connect containerd service" Oct 2 19:37:44.991477 env[1112]: time="2023-10-02T19:37:44.989639849Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:37:44.991619 env[1112]: time="2023-10-02T19:37:44.991590277Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:37:44.991732 env[1112]: time="2023-10-02T19:37:44.991701786Z" level=info msg="Start subscribing containerd event" Oct 2 19:37:44.991796 env[1112]: time="2023-10-02T19:37:44.991744376Z" level=info msg="Start recovering state" Oct 2 19:37:44.991796 env[1112]: time="2023-10-02T19:37:44.991792436Z" level=info msg="Start event monitor" Oct 2 19:37:44.991860 env[1112]: time="2023-10-02T19:37:44.991801774Z" level=info msg="Start snapshots syncer" Oct 2 19:37:44.991860 env[1112]: time="2023-10-02T19:37:44.991809919Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:37:44.991860 env[1112]: time="2023-10-02T19:37:44.991816772Z" level=info msg="Start streaming server" Oct 2 19:37:44.992143 env[1112]: time="2023-10-02T19:37:44.992120161Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:37:44.992208 env[1112]: time="2023-10-02T19:37:44.992156660Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:37:44.992378 systemd[1]: Started containerd.service. Oct 2 19:37:44.998342 env[1112]: time="2023-10-02T19:37:44.992253211Z" level=info msg="containerd successfully booted in 0.112243s" Oct 2 19:37:45.005320 tar[1106]: ./host-local Oct 2 19:37:45.033243 locksmithd[1136]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:37:45.036349 tar[1106]: ./vrf Oct 2 19:37:45.068199 tar[1106]: ./bridge Oct 2 19:37:45.109925 tar[1106]: ./tuning Oct 2 19:37:45.140251 tar[1106]: ./firewall Oct 2 19:37:45.180402 tar[1106]: ./host-device Oct 2 19:37:45.215392 tar[1106]: ./sbr Oct 2 19:37:45.244644 systemd-networkd[1005]: eth0: Gained IPv6LL Oct 2 19:37:45.247599 tar[1106]: ./loopback Oct 2 19:37:45.277430 tar[1106]: ./dhcp Oct 2 19:37:45.320861 systemd[1]: Finished prepare-critools.service. Oct 2 19:37:45.362557 tar[1106]: ./ptp Oct 2 19:37:45.392214 tar[1106]: ./ipvlan Oct 2 19:37:45.420154 tar[1106]: ./bandwidth Oct 2 19:37:45.458188 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:37:45.977067 sshd_keygen[1096]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:37:45.993548 systemd[1]: Finished sshd-keygen.service. Oct 2 19:37:45.995413 systemd[1]: Starting issuegen.service... Oct 2 19:37:45.999715 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:37:45.999828 systemd[1]: Finished issuegen.service. Oct 2 19:37:46.001609 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:37:46.006638 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:37:46.008323 systemd[1]: Started getty@tty1.service. Oct 2 19:37:46.009788 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:37:46.010669 systemd[1]: Reached target getty.target. 
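Annotation: the cri plugin error above ("no network config found in /etc/cni/net.d") clears once a config file appears in the NetworkPluginConfDir from the config dump, using plugins like the bridge/host-local/loopback binaries unpacked by prepare-cni-plugins.service. A minimal sketch of dropping one in; the directories come from the log, but the network name, bridge name, and subnet are illustrative assumptions, not values from this host:

```python
# Sketch (run as root): write a minimal CNI conflist so the cri plugin's
# "cni network conf syncer" can initialize pod networking.
# Paths come from the config dump above (NetworkPluginConfDir=/etc/cni/net.d,
# NetworkPluginBinDir=/opt/cni/bin); name/bridge/subnet are illustrative.
import json
import pathlib

conf_dir = pathlib.Path("/etc/cni/net.d")
conf_dir.mkdir(parents=True, exist_ok=True)

conflist = {
    "cniVersion": "0.3.1",
    "name": "containerd-net",              # illustrative network name
    "plugins": [
        {
            "type": "bridge",              # one of the plugins extracted above
            "bridge": "cni0",              # illustrative bridge name
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # illustrative pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "loopback"},
    ],
}

(conf_dir / "10-containerd-net.conflist").write_text(json.dumps(conflist, indent=2))
```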
Oct 2 19:37:46.011456 systemd[1]: Reached target multi-user.target. Oct 2 19:37:46.013063 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:37:46.021661 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:37:46.021784 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:37:46.022720 systemd[1]: Startup finished in 493ms (kernel) + 6.343s (initrd) + 8.022s (userspace) = 14.859s. Oct 2 19:37:47.064549 systemd[1]: Created slice system-sshd.slice. Oct 2 19:37:47.065559 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:49288.service. Oct 2 19:37:47.207761 sshd[1161]: Accepted publickey for core from 10.0.0.1 port 49288 ssh2: RSA SHA256:RFT1jC4fVREjPwURffbLGeUL4d81gAjV9CJ7mooV97Q Oct 2 19:37:47.209219 sshd[1161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:37:47.219860 systemd-logind[1100]: New session 1 of user core. Oct 2 19:37:47.220892 systemd[1]: Created slice user-500.slice. Oct 2 19:37:47.222085 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:37:47.230213 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:37:47.231824 systemd[1]: Starting user@500.service... Oct 2 19:37:47.235297 (systemd)[1164]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:37:47.308260 systemd[1164]: Queued start job for default target default.target. Oct 2 19:37:47.308673 systemd[1164]: Reached target paths.target. Oct 2 19:37:47.308693 systemd[1164]: Reached target sockets.target. Oct 2 19:37:47.308703 systemd[1164]: Reached target timers.target. Oct 2 19:37:47.308713 systemd[1164]: Reached target basic.target. Oct 2 19:37:47.308747 systemd[1164]: Reached target default.target. Oct 2 19:37:47.308767 systemd[1164]: Startup finished in 66ms. Oct 2 19:37:47.308840 systemd[1]: Started user@500.service. Oct 2 19:37:47.309783 systemd[1]: Started session-1.scope. Oct 2 19:37:47.359291 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:49296.service. Oct 2 19:37:47.394783 sshd[1173]: Accepted publickey for core from 10.0.0.1 port 49296 ssh2: RSA SHA256:RFT1jC4fVREjPwURffbLGeUL4d81gAjV9CJ7mooV97Q Oct 2 19:37:47.396237 sshd[1173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:37:47.400049 systemd-logind[1100]: New session 2 of user core. Oct 2 19:37:47.400797 systemd[1]: Started session-2.scope. Oct 2 19:37:47.457093 sshd[1173]: pam_unix(sshd:session): session closed for user core Oct 2 19:37:47.460679 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:49296.service: Deactivated successfully. Oct 2 19:37:47.461449 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:37:47.462099 systemd-logind[1100]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:37:47.463688 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:49310.service. Oct 2 19:37:47.465101 systemd-logind[1100]: Removed session 2. Oct 2 19:37:47.509105 sshd[1179]: Accepted publickey for core from 10.0.0.1 port 49310 ssh2: RSA SHA256:RFT1jC4fVREjPwURffbLGeUL4d81gAjV9CJ7mooV97Q Oct 2 19:37:47.510307 sshd[1179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:37:47.513542 systemd-logind[1100]: New session 3 of user core. Oct 2 19:37:47.514227 systemd[1]: Started session-3.scope. Oct 2 19:37:47.563283 sshd[1179]: pam_unix(sshd:session): session closed for user core Oct 2 19:37:47.566222 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:49310.service: Deactivated successfully. 
Oct 2 19:37:47.566847 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:37:47.567426 systemd-logind[1100]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:37:47.568497 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:49314.service. Oct 2 19:37:47.569386 systemd-logind[1100]: Removed session 3. Oct 2 19:37:47.609257 sshd[1185]: Accepted publickey for core from 10.0.0.1 port 49314 ssh2: RSA SHA256:RFT1jC4fVREjPwURffbLGeUL4d81gAjV9CJ7mooV97Q Oct 2 19:37:47.610241 sshd[1185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:37:47.613697 systemd-logind[1100]: New session 4 of user core. Oct 2 19:37:47.614547 systemd[1]: Started session-4.scope. Oct 2 19:37:47.666650 sshd[1185]: pam_unix(sshd:session): session closed for user core Oct 2 19:37:47.670050 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:49314.service: Deactivated successfully. Oct 2 19:37:47.670702 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:37:47.671362 systemd-logind[1100]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:37:47.672481 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:49320.service. Oct 2 19:37:47.673312 systemd-logind[1100]: Removed session 4. Oct 2 19:37:47.707737 sshd[1191]: Accepted publickey for core from 10.0.0.1 port 49320 ssh2: RSA SHA256:RFT1jC4fVREjPwURffbLGeUL4d81gAjV9CJ7mooV97Q Oct 2 19:37:47.708726 sshd[1191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:37:47.711720 systemd-logind[1100]: New session 5 of user core. Oct 2 19:37:47.712509 systemd[1]: Started session-5.scope. Oct 2 19:37:47.768826 sudo[1194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:37:47.768982 sudo[1194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:37:47.778341 dbus-daemon[1079]: \xd0mNpbU: received setenforce notice (enforcing=1374083376) Oct 2 19:37:47.780452 sudo[1194]: pam_unix(sudo:session): session closed for user root Oct 2 19:37:47.782261 sshd[1191]: pam_unix(sshd:session): session closed for user core Oct 2 19:37:47.784941 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:49320.service: Deactivated successfully. Oct 2 19:37:47.785653 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:37:47.786335 systemd-logind[1100]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:37:47.787661 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:49328.service. Oct 2 19:37:47.788326 systemd-logind[1100]: Removed session 5. Oct 2 19:37:47.822285 sshd[1198]: Accepted publickey for core from 10.0.0.1 port 49328 ssh2: RSA SHA256:RFT1jC4fVREjPwURffbLGeUL4d81gAjV9CJ7mooV97Q Oct 2 19:37:47.823440 sshd[1198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:37:47.826513 systemd-logind[1100]: New session 6 of user core. Oct 2 19:37:47.827308 systemd[1]: Started session-6.scope. 
Oct 2 19:37:47.876963 sudo[1202]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:37:47.877110 sudo[1202]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:37:47.879092 sudo[1202]: pam_unix(sudo:session): session closed for user root Oct 2 19:37:47.882783 sudo[1201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:37:47.882954 sudo[1201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:37:47.889819 systemd[1]: Stopping audit-rules.service... Oct 2 19:37:47.889000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:37:47.889000 audit[1205]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc3706d270 a2=420 a3=0 items=0 ppid=1 pid=1205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:47.889000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:37:47.890698 auditctl[1205]: No rules Oct 2 19:37:47.890880 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:37:47.891040 systemd[1]: Stopped audit-rules.service. Oct 2 19:37:47.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:47.892451 systemd[1]: Starting audit-rules.service... Oct 2 19:37:47.905863 augenrules[1222]: No rules Oct 2 19:37:47.906316 systemd[1]: Finished audit-rules.service. Oct 2 19:37:47.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:47.907055 sudo[1201]: pam_unix(sudo:session): session closed for user root Oct 2 19:37:47.905000 audit[1201]: USER_END pid=1201 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:37:47.905000 audit[1201]: CRED_DISP pid=1201 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:37:47.908203 sshd[1198]: pam_unix(sshd:session): session closed for user core Oct 2 19:37:47.908000 audit[1198]: USER_END pid=1198 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:37:47.908000 audit[1198]: CRED_DISP pid=1198 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:37:47.911746 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:49340.service. 
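Annotation: the SYSCALL/PROCTITLE pair logged during the audit-rules restart above can be read back out directly: arch=c000003e is x86_64, syscall=44 is sendto (auditctl writing to the audit netlink socket; a2=420 hex is the 1056-byte message length echoed in exit=1056), and the PROCTITLE payload is hex-encoded argv with NUL separators. A small decoding sketch using the value copied from the record:

```python
# Sketch: decode the PROCTITLE record emitted while audit-rules was restarted.
# The payload is hex-encoded argv, NUL-separated; decoding it shows the command
# that produced the CONFIG_CHANGE (op=remove_rule) and SYSCALL records above.
raw = "2F7362696E2F617564697463746C002D44"        # copied from the PROCTITLE record
argv = bytes.fromhex(raw).split(b"\x00")
print(" ".join(a.decode() for a in argv))          # -> /sbin/auditctl -D
```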
Oct 2 19:37:47.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:49340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:47.912157 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:49328.service: Deactivated successfully. Oct 2 19:37:47.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.12:22-10.0.0.1:49328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:47.912674 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:37:47.913112 systemd-logind[1100]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:37:47.913755 systemd-logind[1100]: Removed session 6. Oct 2 19:37:47.944000 audit[1227]: USER_ACCT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:37:47.946089 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 49340 ssh2: RSA SHA256:RFT1jC4fVREjPwURffbLGeUL4d81gAjV9CJ7mooV97Q Oct 2 19:37:47.945000 audit[1227]: CRED_ACQ pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:37:47.946000 audit[1227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf40e7260 a2=3 a3=0 items=0 ppid=1 pid=1227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:47.946000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:37:47.947425 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:37:47.950377 systemd-logind[1100]: New session 7 of user core. Oct 2 19:37:47.951052 systemd[1]: Started session-7.scope. Oct 2 19:37:47.953000 audit[1227]: USER_START pid=1227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:37:47.954000 audit[1230]: CRED_ACQ pid=1230 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:37:47.998000 audit[1231]: USER_ACCT pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:37:48.000023 sudo[1231]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:37:48.000187 sudo[1231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:37:48.000594 kernel: kauditd_printk_skb: 206 callbacks suppressed Oct 2 19:37:48.000625 kernel: audit: type=1101 audit(1696275467.998:181): pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:37:47.998000 audit[1231]: CRED_REFR pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.005256 kernel: audit: type=1110 audit(1696275467.998:182): pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.000000 audit[1231]: USER_START pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.007972 kernel: audit: type=1105 audit(1696275468.000:183): pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.501771 systemd[1]: Reloading. Oct 2 19:37:48.562656 /usr/lib/systemd/system-generators/torcx-generator[1261]: time="2023-10-02T19:37:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:37:48.562680 /usr/lib/systemd/system-generators/torcx-generator[1261]: time="2023-10-02T19:37:48Z" level=info msg="torcx already run" Oct 2 19:37:48.648311 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:37:48.648330 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:37:48.673621 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
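Annotation: the reload that follows emits a long run of SELinux AVC denials; each record names the blocked capability both symbolically and numerically (capability=39 pairs with { bpf }, capability=38 with { perfmon }). A small sketch for summarizing such a run instead of reading every repeat; the two-entry table is taken straight from the records, and the full numbering lives in linux/capability.h:

```python
# Sketch: tally AVC capability denials (as in the systemd reload below) from a
# journal dump fed on stdin. The 38/39 -> perfmon/bpf mapping is read directly
# off the records themselves; anything else falls back to the logged symbol.
import re
import sys
from collections import Counter

CAP_NAMES = {38: "perfmon", 39: "bpf"}   # from the records; full list in linux/capability.h

counts = Counter()
for line in sys.stdin:
    m = re.search(r"avc:\s+denied\s+\{ (\w+) \}.*capability=(\d+)", line)
    if m:
        counts[(int(m.group(2)), m.group(1))] += 1

for (num, symbol), n in sorted(counts.items()):
    print(f"capability={num} ({CAP_NAMES.get(num, symbol)}): {n} denials")
```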
Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.741535 kernel: audit: type=1400 audit(1696275468.735:184): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.741576 kernel: audit: type=1400 audit(1696275468.735:185): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.741591 kernel: audit: type=1400 audit(1696275468.735:186): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.743869 kernel: audit: type=1400 audit(1696275468.735:187): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.748653 kernel: audit: type=1400 audit(1696275468.735:188): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.748692 kernel: audit: type=1400 audit(1696275468.735:189): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.753486 kernel: audit: type=1400 audit(1696275468.735:190): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit: BPF prog-id=37 op=LOAD Oct 2 19:37:48.740000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.740000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit: BPF prog-id=38 op=LOAD Oct 2 19:37:48.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit[1]: AVC avc: denied 
{ perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit: BPF prog-id=39 op=LOAD Oct 2 19:37:48.745000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:37:48.745000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit: BPF prog-id=40 op=LOAD Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.747000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.749000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.749000 audit: BPF prog-id=41 op=LOAD Oct 2 19:37:48.749000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:37:48.749000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit: BPF prog-id=42 op=LOAD Oct 2 19:37:48.752000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.752000 audit: BPF prog-id=43 op=LOAD Oct 2 19:37:48.752000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.755000 audit: BPF prog-id=44 op=LOAD Oct 2 19:37:48.755000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit: BPF prog-id=45 op=LOAD Oct 2 19:37:48.756000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit: BPF prog-id=46 op=LOAD Oct 2 19:37:48.756000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit: BPF prog-id=47 op=LOAD Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied 
{ bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.756000 audit: BPF prog-id=48 op=LOAD Oct 2 19:37:48.756000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:37:48.756000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.757000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit: BPF prog-id=49 op=LOAD Oct 2 19:37:48.758000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit: BPF prog-id=50 op=LOAD Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:48.758000 audit: BPF prog-id=51 op=LOAD Oct 2 19:37:48.758000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:37:48.758000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:37:48.765471 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:37:48.771320 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:37:48.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.771716 systemd[1]: Reached target network-online.target. Oct 2 19:37:48.773089 systemd[1]: Started kubelet.service. Oct 2 19:37:48.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.783504 systemd[1]: Starting coreos-metadata.service... Oct 2 19:37:48.791104 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:37:48.791278 systemd[1]: Finished coreos-metadata.service. Oct 2 19:37:48.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:48.824910 kubelet[1302]: E1002 19:37:48.824808 1302 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Oct 2 19:37:48.826808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:37:48.826947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:37:48.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:37:49.058134 systemd[1]: Stopped kubelet.service. Oct 2 19:37:49.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.073447 systemd[1]: Reloading. 
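Annotation: the kubelet exit a few entries above states its own fix: pass --container-runtime-endpoint, pointing at the CRI socket containerd reported serving earlier (/run/containerd/containerd.sock). A quick pre-flight sketch; the flag name and socket path come from the log, while the unix:// scheme is the conventional form for that flag and is assumed here:

```python
# Sketch: verify the containerd socket advertised above exists before retrying
# kubelet, and print the endpoint value the error message asks for.
import pathlib

SOCK = pathlib.Path("/run/containerd/containerd.sock")   # from the containerd log above
endpoint = f"unix://{SOCK}"

if SOCK.is_socket():
    print(f"containerd CRI socket present; retry kubelet with "
          f"--container-runtime-endpoint={endpoint}")
else:
    print("no containerd socket found; check containerd.service first")
```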
Oct 2 19:37:49.130887 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2023-10-02T19:37:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:37:49.130918 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2023-10-02T19:37:49Z" level=info msg="torcx already run" Oct 2 19:37:49.196390 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:37:49.196413 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:37:49.216765 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit: BPF prog-id=52 op=LOAD Oct 2 19:37:49.274000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.274000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit: BPF prog-id=53 op=LOAD Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.275000 audit: BPF prog-id=54 
op=LOAD Oct 2 19:37:49.275000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:37:49.275000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:37:49.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit: BPF prog-id=55 op=LOAD Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit: BPF prog-id=56 op=LOAD Oct 2 19:37:49.276000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:37:49.276000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.276000 audit: BPF prog-id=57 op=LOAD Oct 2 19:37:49.276000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { 
perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.277000 audit: BPF prog-id=58 op=LOAD Oct 2 19:37:49.277000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.279000 audit: BPF prog-id=59 op=LOAD Oct 2 19:37:49.279000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.280000 audit: BPF prog-id=60 op=LOAD Oct 2 19:37:49.280000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit: BPF prog-id=61 op=LOAD Oct 2 19:37:49.281000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit: BPF prog-id=62 op=LOAD Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.281000 audit: BPF prog-id=63 op=LOAD Oct 2 19:37:49.281000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:37:49.281000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit: BPF prog-id=64 op=LOAD Oct 2 19:37:49.283000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit: BPF prog-id=65 op=LOAD Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.283000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:37:49.283000 audit: BPF prog-id=66 op=LOAD Oct 2 19:37:49.283000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:37:49.283000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:37:49.296186 systemd[1]: Started kubelet.service. Oct 2 19:37:49.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:37:49.334397 kubelet[1409]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:37:49.334397 kubelet[1409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:37:49.334397 kubelet[1409]: I1002 19:37:49.334358 1409 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:37:49.336429 kubelet[1409]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:37:49.336429 kubelet[1409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:37:49.786319 kubelet[1409]: I1002 19:37:49.786212 1409 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Oct 2 19:37:49.786319 kubelet[1409]: I1002 19:37:49.786246 1409 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:37:49.786503 kubelet[1409]: I1002 19:37:49.786488 1409 server.go:836] "Client rotation is on, will bootstrap in background" Oct 2 19:37:49.788485 kubelet[1409]: I1002 19:37:49.788456 1409 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:37:49.792466 kubelet[1409]: I1002 19:37:49.792424 1409 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:37:49.792727 kubelet[1409]: I1002 19:37:49.792678 1409 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:37:49.792780 kubelet[1409]: I1002 19:37:49.792762 1409 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:37:49.792904 kubelet[1409]: I1002 19:37:49.792782 1409 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:37:49.792904 kubelet[1409]: I1002 19:37:49.792796 1409 container_manager_linux.go:308] "Creating device plugin manager" Oct 2 19:37:49.792982 kubelet[1409]: I1002 19:37:49.792908 1409 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:37:49.795716 kubelet[1409]: I1002 19:37:49.795698 1409 kubelet.go:398] "Attempting to sync node with API server" Oct 2 19:37:49.795775 kubelet[1409]: I1002 19:37:49.795724 1409 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:37:49.795775 kubelet[1409]: I1002 19:37:49.795750 1409 kubelet.go:297] "Adding apiserver pod source" Oct 2 19:37:49.795775 kubelet[1409]: I1002 19:37:49.795767 1409 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:37:49.795871 kubelet[1409]: E1002 19:37:49.795847 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:49.795902 kubelet[1409]: E1002 19:37:49.795893 1409 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:49.796706 kubelet[1409]: I1002 19:37:49.796690 1409 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:37:49.797016 kubelet[1409]: W1002 19:37:49.796996 1409 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:37:49.797439 kubelet[1409]: I1002 19:37:49.797416 1409 server.go:1186] "Started kubelet" Oct 2 19:37:49.797928 kubelet[1409]: I1002 19:37:49.797899 1409 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:37:49.797000 audit[1409]: AVC avc: denied { mac_admin } for pid=1409 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.797000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:37:49.797000 audit[1409]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b84b10 a1=c000b5e7f8 a2=c000b84ae0 a3=25 items=0 ppid=1 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.797000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:37:49.797000 audit[1409]: AVC avc: denied { mac_admin } for pid=1409 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.797000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:37:49.797000 audit[1409]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b8c3e0 a1=c000b5e810 a2=c000b84ba0 a3=25 items=0 ppid=1 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.797000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:37:49.798931 kubelet[1409]: I1002 19:37:49.798469 1409 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:37:49.798931 kubelet[1409]: I1002 19:37:49.798507 1409 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:37:49.798931 kubelet[1409]: I1002 19:37:49.798634 1409 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:37:49.798931 kubelet[1409]: I1002 19:37:49.798885 1409 server.go:451] "Adding debug handlers to kubelet server" Oct 2 19:37:49.799335 kubelet[1409]: E1002 19:37:49.798470 1409 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:37:49.799439 kubelet[1409]: E1002 19:37:49.799418 1409 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:37:49.799528 kubelet[1409]: I1002 19:37:49.799465 1409 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:37:49.799884 kubelet[1409]: I1002 19:37:49.799480 1409 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:37:49.813225 kubelet[1409]: E1002 19:37:49.813186 1409 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:37:49.813356 kubelet[1409]: W1002 19:37:49.813261 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:37:49.813356 kubelet[1409]: E1002 19:37:49.813283 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:37:49.813434 kubelet[1409]: E1002 19:37:49.813344 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d727b73ad", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 797389229, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 797389229, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:37:49.813584 kubelet[1409]: W1002 19:37:49.813550 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:37:49.813584 kubelet[1409]: E1002 19:37:49.813582 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:37:49.813654 kubelet[1409]: W1002 19:37:49.813627 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:37:49.813654 kubelet[1409]: E1002 19:37:49.813637 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:37:49.815605 kubelet[1409]: E1002 19:37:49.815523 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d729a0daf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 799394735, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 799394735, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
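Every rejection above is for User "system:anonymous": the kubelet reaches the API server but has no client credentials yet, which matches the earlier "Client rotation is on, will bootstrap in background" entry. The hex proctitle audited for pid 1409 above decodes to /opt/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf (truncated), which names the files the bootstrap depends on. A small sketch to check which of them exist on the node; the rotated-certificate path is an assumed kubelet default, not taken from the log.

# Check whether the kubeconfigs named on the kubelet command line are present.
import os

PATHS = [
    "/etc/kubernetes/bootstrap-kubelet.conf",           # used until a client certificate is issued
    "/etc/kubernetes/kubelet.conf",                      # written/rotated once bootstrap completes
    "/var/lib/kubelet/pki/kubelet-client-current.pem",   # assumed default location of the rotated client cert
]

for path in PATHS:
    if os.path.isfile(path) and os.path.getsize(path) > 0:
        print(f"present: {path}")
    else:
        print(f"missing or empty: {path}")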
Oct 2 19:37:49.826742 kubelet[1409]: I1002 19:37:49.826714 1409 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:37:49.826742 kubelet[1409]: I1002 19:37:49.826734 1409 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:37:49.826742 kubelet[1409]: I1002 19:37:49.826754 1409 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:37:49.827448 kubelet[1409]: E1002 19:37:49.827340 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b8272", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825704562, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825704562, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:49.828798 kubelet[1409]: E1002 19:37:49.828728 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b9fc4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825712068, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825712068, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:37:49.830449 kubelet[1409]: I1002 19:37:49.830432 1409 policy_none.go:49] "None policy: Start" Oct 2 19:37:49.831095 kubelet[1409]: E1002 19:37:49.830881 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742bb420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825717280, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825717280, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:49.831225 kubelet[1409]: I1002 19:37:49.831135 1409 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:37:49.831225 kubelet[1409]: I1002 19:37:49.831154 1409 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:37:49.833000 audit[1425]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.833000 audit[1425]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7690a690 a2=0 a3=7ffc7690a67c items=0 ppid=1409 pid=1425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:37:49.834000 audit[1428]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.834000 audit[1428]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff9bca8ac0 a2=0 a3=7fff9bca8aac items=0 ppid=1409 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.834000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:37:49.836003 systemd[1]: Created slice kubepods.slice. Oct 2 19:37:49.840744 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:37:49.842964 systemd[1]: Created slice kubepods-besteffort.slice. 
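The NETFILTER_CFG records above show iptables being run by the kubelet (ppid=1409) to create its chains, KUBE-IPTABLES-HINT in the mangle table and KUBE-FIREWALL in the filter table, but the audited command line is hex-encoded with NUL separators in the proctitle= field. A short decoder, using the first proctitle value above as the example input:

# Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv) back into
# the command line that was run.
def decode_proctitle(hex_value: str) -> str:
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(arg.decode() for arg in argv if arg)

# proctitle= value from the first NETFILTER_CFG record above
sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
print(decode_proctitle(sample))
# -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle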
Oct 2 19:37:49.851959 kubelet[1409]: I1002 19:37:49.851927 1409 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:37:49.852092 kubelet[1409]: I1002 19:37:49.852062 1409 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:37:49.850000 audit[1409]: AVC avc: denied { mac_admin } for pid=1409 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:37:49.850000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:37:49.850000 audit[1409]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000fa33b0 a1=c00103bde8 a2=c000fa3380 a3=25 items=0 ppid=1 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.850000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:37:49.852817 kubelet[1409]: E1002 19:37:49.852801 1409 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:37:49.852984 kubelet[1409]: I1002 19:37:49.852962 1409 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:37:49.854459 kubelet[1409]: E1002 19:37:49.854380 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d75ce37a1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 853144993, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 853144993, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
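The mac_admin denial and the SELINUX_ERR op=setxattr invalid_context records above correspond to the kubelet messages about /var/lib/kubelet/plugins, /var/lib/kubelet/plugins_registry and /var/lib/kubelet/device-plugins/: kubelet tries to label those directories as system_u:object_r:container_file_t:s0, the setxattr call fails with EINVAL, and kubelet downgrades that to the "Unprivileged containerized plugins might not work" warnings. The same check can be reproduced outside kubelet with the sketch below (run as root on the node; the target directory is one of the paths named in the log).

# Attempt the same SELinux relabel kubelet performs on its plugin directories
# and report the errno, mirroring the audited setxattr failure.
import errno
import os

PATH = "/var/lib/kubelet/device-plugins/"            # directory named in the log
CONTEXT = b"system_u:object_r:container_file_t:s0"   # context from the SELINUX_ERR record

try:
    os.setxattr(PATH, "security.selinux", CONTEXT)
    print(f"relabel of {PATH} succeeded")
except OSError as err:
    # EINVAL here matches "setxattr ...: invalid argument" in the kubelet log
    name = errno.errorcode.get(err.errno, err.errno)
    print(f"setxattr failed: errno={name} ({err})")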
Oct 2 19:37:49.836000 audit[1430]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.836000 audit[1430]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffce5c25700 a2=0 a3=7ffce5c256ec items=0 ppid=1409 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:37:49.855000 audit[1435]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.855000 audit[1435]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffff180a0b0 a2=0 a3=7ffff180a09c items=0 ppid=1409 pid=1435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:37:49.890000 audit[1440]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.890000 audit[1440]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd6d506470 a2=0 a3=7ffd6d50645c items=0 ppid=1409 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:37:49.891000 audit[1441]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.891000 audit[1441]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd7217d640 a2=0 a3=7ffd7217d62c items=0 ppid=1409 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.891000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:37:49.895000 audit[1444]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.895000 audit[1444]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff7f8a7740 a2=0 a3=7fff7f8a772c items=0 ppid=1409 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.895000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:37:49.900101 kubelet[1409]: I1002 19:37:49.900057 1409 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:37:49.898000 audit[1447]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.898000 audit[1447]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd743436b0 a2=0 a3=7ffd7434369c items=0 ppid=1409 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:37:49.899000 audit[1448]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.899000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc033cd0c0 a2=0 a3=7ffc033cd0ac items=0 ppid=1409 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:37:49.901323 kubelet[1409]: E1002 19:37:49.901293 1409 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:37:49.901713 kubelet[1409]: E1002 19:37:49.901648 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b8272", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825704562, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 900011536, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b8272" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:37:49.900000 audit[1449]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.900000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6d963050 a2=0 a3=7ffc6d96303c items=0 ppid=1409 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.900000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:37:49.902621 kubelet[1409]: E1002 19:37:49.902555 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b9fc4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825712068, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 900020120, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b9fc4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:37:49.903329 kubelet[1409]: E1002 19:37:49.903268 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742bb420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825717280, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 900023602, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742bb420" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:49.902000 audit[1451]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.902000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcda95acf0 a2=0 a3=7ffcda95acdc items=0 ppid=1409 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:37:49.904000 audit[1453]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.904000 audit[1453]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffa39b2ed0 a2=0 a3=7fffa39b2ebc items=0 ppid=1409 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:37:49.924000 audit[1456]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.924000 audit[1456]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd076e97b0 a2=0 a3=7ffd076e979c items=0 ppid=1409 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.924000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:37:49.926000 audit[1458]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.926000 audit[1458]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff086fea30 a2=0 a3=7fff086fea1c items=0 ppid=1409 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:37:49.931000 audit[1461]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.931000 audit[1461]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7fffdac8c330 a2=0 a3=7fffdac8c31c items=0 ppid=1409 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:37:49.933309 kubelet[1409]: I1002 19:37:49.933279 1409 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:37:49.932000 audit[1462]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.932000 audit[1462]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd74eb1de0 a2=0 a3=7ffd74eb1dcc items=0 ppid=1409 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:37:49.932000 audit[1463]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.932000 audit[1463]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc9822e90 a2=0 a3=7ffdc9822e7c items=0 ppid=1409 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:37:49.933000 audit[1465]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.933000 audit[1465]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd01aff1b0 a2=0 a3=7ffd01aff19c items=0 ppid=1409 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:37:49.933000 audit[1464]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=1464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.933000 audit[1464]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffb286abd0 a2=0 a3=7fffb286abbc items=0 ppid=1409 pid=1464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:37:49.934000 audit[1466]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:37:49.934000 audit[1466]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe7e7e470 a2=0 a3=7fffe7e7e45c items=0 ppid=1409 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:37:49.935000 audit[1468]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1468 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:37:49.935000 audit[1468]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdac43f050 a2=0 a3=7ffdac43f03c items=0 ppid=1409 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:37:49.936000 audit[1469]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.936000 audit[1469]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffce6c02a40 a2=0 a3=7ffce6c02a2c items=0 ppid=1409 pid=1469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:37:49.937000 audit[1471]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.937000 audit[1471]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe7d2579f0 a2=0 a3=7ffe7d2579dc items=0 ppid=1409 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:37:49.938000 audit[1472]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.938000 audit[1472]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff3a41c620 a2=0 a3=7fff3a41c60c items=0 ppid=1409 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:37:49.939000 audit[1473]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.939000 audit[1473]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4f09c550 a2=0 a3=7ffe4f09c53c items=0 ppid=1409 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:37:49.940000 audit[1475]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1475 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.940000 audit[1475]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc374514f0 a2=0 a3=7ffc374514dc items=0 ppid=1409 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:37:49.942000 audit[1477]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.942000 audit[1477]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe1d4de9e0 a2=0 a3=7ffe1d4de9cc items=0 ppid=1409 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:37:49.943000 audit[1479]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.943000 audit[1479]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc6482f030 a2=0 a3=7ffc6482f01c items=0 ppid=1409 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:37:49.945000 audit[1481]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.945000 audit[1481]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd8d7b1b10 a2=0 a3=7ffd8d7b1afc items=0 ppid=1409 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:37:49.947000 audit[1483]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.947000 audit[1483]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fff017831c0 a2=0 a3=7fff017831ac items=0 ppid=1409 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.947000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:37:49.948665 kubelet[1409]: I1002 19:37:49.948640 1409 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:37:49.948665 kubelet[1409]: I1002 19:37:49.948661 1409 status_manager.go:176] "Starting to sync pod status with apiserver" Oct 2 19:37:49.948719 kubelet[1409]: I1002 19:37:49.948678 1409 kubelet.go:2113] "Starting kubelet main sync loop" Oct 2 19:37:49.948741 kubelet[1409]: E1002 19:37:49.948729 1409 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:37:49.948000 audit[1484]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.948000 audit[1484]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec349ade0 a2=0 a3=7ffec349adcc items=0 ppid=1409 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:37:49.948000 audit[1485]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.948000 audit[1485]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2c9c6ab0 a2=0 a3=7ffe2c9c6a9c items=0 ppid=1409 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:37:49.950089 kubelet[1409]: W1002 19:37:49.950010 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:37:49.950089 kubelet[1409]: E1002 19:37:49.950036 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:37:49.949000 audit[1486]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:37:49.949000 audit[1486]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9d174bd0 a2=0 a3=7ffe9d174bbc items=0 ppid=1409 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:37:49.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:37:50.014419 
kubelet[1409]: E1002 19:37:50.014326 1409 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:37:50.102447 kubelet[1409]: I1002 19:37:50.102340 1409 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:37:50.103690 kubelet[1409]: E1002 19:37:50.103643 1409 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:37:50.103811 kubelet[1409]: E1002 19:37:50.103702 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b8272", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825704562, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 50, 102297631, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b8272" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
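
Decoded with the helper above, the nat-table records form the familiar Kubernetes mark scheme: KUBE-MARK-DROP ORs 0x00008000 into the packet mark, KUBE-MARK-MASQ ORs in 0x00004000, and KUBE-POSTROUTING checks that bit, clears it with --xor-mark 0x00004000 and MASQUERADEs the "kubernetes service traffic requiring SNAT". A small sketch of the bit arithmetic those --or-mark/--xor-mark flags imply (constants taken from the log; the flow is illustrative, not the kernel's actual packet path):

# mark_bits.py -- the firewall-mark bits used by the decoded rules above.
DROP_BIT = 0x00008000   # set by KUBE-MARK-DROP  (MARK --or-mark 0x00008000)
MASQ_BIT = 0x00004000   # set by KUBE-MARK-MASQ  (MARK --or-mark 0x00004000)

def needs_snat(mark: int) -> bool:
    """KUBE-POSTROUTING only masquerades packets carrying the MASQ bit."""
    return mark & MASQ_BIT == MASQ_BIT

mark = 0
mark |= MASQ_BIT            # --or-mark: set the bit, leave the rest untouched
assert needs_snat(mark)
mark ^= MASQ_BIT            # --xor-mark: clear it again before MASQUERADE
assert not needs_snat(mark)
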
Oct 2 19:37:50.104474 kubelet[1409]: E1002 19:37:50.104431 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b9fc4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825712068, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 50, 102309449, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b9fc4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:50.199234 kubelet[1409]: E1002 19:37:50.199066 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742bb420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825717280, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 50, 102312758, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742bb420" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:37:50.415968 kubelet[1409]: E1002 19:37:50.415859 1409 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:37:50.504861 kubelet[1409]: I1002 19:37:50.504822 1409 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:37:50.506028 kubelet[1409]: E1002 19:37:50.506003 1409 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:37:50.506114 kubelet[1409]: E1002 19:37:50.506013 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b8272", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825704562, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 50, 504778863, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b8272" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
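
Note also how the rejected events are reused rather than recreated: each keeps its Name and FirstTimestamp while Count climbs (2, 3, 4 so far) and only LastTimestamp moves. A toy sketch of that aggregation pattern, with field names taken from the log but the logic merely illustrative (this is not client-go's event correlator):

# event_rollup.py -- repeats of the same event bump Count and LastTimestamp
# instead of producing a new object, mirroring the records in this log.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    first_timestamp: str
    last_timestamp: str
    count: int = 1

def record(events: dict, name: str, timestamp: str) -> Event:
    if name in events:                 # repeat occurrence: aggregate
        ev = events[name]
        ev.count += 1
        ev.last_timestamp = timestamp
    else:                              # first occurrence: remember it
        ev = events[name] = Event(name, timestamp, timestamp)
    return ev

events = {}
record(events, "10.0.0.12.178a618d742b8272", "19:37:49.825704562")
print(record(events, "10.0.0.12.178a618d742b8272", "19:37:50.102297631").count)  # 2
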
Oct 2 19:37:50.599870 kubelet[1409]: E1002 19:37:50.599741 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b9fc4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825712068, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 50, 504788234, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b9fc4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:50.784924 kubelet[1409]: W1002 19:37:50.784829 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:37:50.784924 kubelet[1409]: E1002 19:37:50.784855 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:37:50.796096 kubelet[1409]: E1002 19:37:50.796072 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:50.799077 kubelet[1409]: E1002 19:37:50.798994 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742bb420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825717280, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 50, 504791325, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.0.0.12.178a618d742bb420" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:50.942562 kubelet[1409]: W1002 19:37:50.942523 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:37:50.942562 kubelet[1409]: E1002 19:37:50.942552 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:37:51.086428 kubelet[1409]: W1002 19:37:51.086305 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:37:51.086428 kubelet[1409]: E1002 19:37:51.086346 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:37:51.217512 kubelet[1409]: E1002 19:37:51.217469 1409 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:37:51.307716 kubelet[1409]: I1002 19:37:51.307673 1409 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:37:51.308850 kubelet[1409]: E1002 19:37:51.308824 1409 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:37:51.308898 kubelet[1409]: E1002 19:37:51.308797 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b8272", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825704562, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 51, 307621189, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b8272" 
is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:51.309966 kubelet[1409]: E1002 19:37:51.309895 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b9fc4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825712068, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 51, 307630940, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b9fc4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:51.396387 kubelet[1409]: W1002 19:37:51.396277 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:37:51.396387 kubelet[1409]: E1002 19:37:51.396306 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:37:51.399071 kubelet[1409]: E1002 19:37:51.398989 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742bb420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825717280, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 51, 307634343, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742bb420" is forbidden: User 
"system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:51.797026 kubelet[1409]: E1002 19:37:51.796880 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:52.797657 kubelet[1409]: E1002 19:37:52.797604 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:52.819468 kubelet[1409]: E1002 19:37:52.819436 1409 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:37:52.909526 kubelet[1409]: I1002 19:37:52.909487 1409 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:37:52.911237 kubelet[1409]: E1002 19:37:52.911196 1409 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:37:52.911374 kubelet[1409]: E1002 19:37:52.911244 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b8272", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825704562, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 52, 909449986, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b8272" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:37:52.912316 kubelet[1409]: E1002 19:37:52.912253 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b9fc4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825712068, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 52, 909458915, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b9fc4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:52.913009 kubelet[1409]: E1002 19:37:52.912949 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742bb420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825717280, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 52, 909461646, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742bb420" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
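
Every one of these API calls is rejected for system:anonymous: the kubelet is still talking to the API server without credentials at this point (its client certificate only arrives at 19:37:59 below, after which node registration finally succeeds). When triaging a log like this it can help to tally which verbs and resources are being denied; a rough sketch, where the regex is an assumption about the message format shown above (it accepts both the plain and the backslash-escaped quoting styles that appear here):

# denial_tally.py -- count 'system:anonymous' RBAC denials in a kubelet log.
import re
from collections import Counter

DENIAL = re.compile(
    r'User \\?"system:anonymous\\?" cannot (\w+) resource \\?"([\w.]+)\\?"')

def tally_denials(log_text: str) -> Counter:
    """Return a Counter of 'verb resource' pairs denied to system:anonymous."""
    return Counter(f"{verb} {res}" for verb, res in DENIAL.findall(log_text))

if __name__ == "__main__":
    line = ('failed to list *v1.Service: services is forbidden: User '
            '"system:anonymous" cannot list resource "services" in API group ""')
    print(tally_denials(line))   # Counter({'list services': 1})
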
Oct 2 19:37:53.385721 kubelet[1409]: W1002 19:37:53.385682 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:37:53.385721 kubelet[1409]: E1002 19:37:53.385713 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:37:53.678509 kubelet[1409]: W1002 19:37:53.678400 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:37:53.678509 kubelet[1409]: E1002 19:37:53.678437 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:37:53.798228 kubelet[1409]: E1002 19:37:53.798158 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:54.085103 kubelet[1409]: W1002 19:37:54.084994 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:37:54.085103 kubelet[1409]: E1002 19:37:54.085033 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:37:54.109754 kubelet[1409]: W1002 19:37:54.109719 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:37:54.109754 kubelet[1409]: E1002 19:37:54.109754 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:37:54.798682 kubelet[1409]: E1002 19:37:54.798585 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:55.799556 kubelet[1409]: E1002 19:37:55.799505 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:56.021547 kubelet[1409]: E1002 19:37:56.021491 1409 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:37:56.113077 kubelet[1409]: I1002 19:37:56.112856 1409 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:37:56.114206 kubelet[1409]: E1002 19:37:56.114130 1409 
event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b8272", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825704562, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 56, 112820556, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b8272" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:56.114336 kubelet[1409]: E1002 19:37:56.114220 1409 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:37:56.115027 kubelet[1409]: E1002 19:37:56.114936 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742b9fc4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825712068, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 56, 112828491, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742b9fc4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
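
The lease controller's retry interval above doubles on every failure: 400ms, 800ms, 1.6s, 3.2s and now 6.4s. A minimal sketch of that kind of capped exponential backoff, with the initial delay and factor read off the log and the cap merely assumed (kubelet's real limit may differ):

# backoff_sketch.py -- reproduce the delay sequence observed in the lease
# errors above; illustrative only, not kubelet's implementation.
def backoff_delays(initial=0.4, factor=2.0, cap=7.0, attempts=5):
    delay = initial
    for _ in range(attempts):
        yield min(delay, cap)      # never wait longer than the (assumed) cap
        delay *= factor

if __name__ == "__main__":
    print([f"{d:g}s" for d in backoff_delays()])
    # -> ['0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
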
Oct 2 19:37:56.115814 kubelet[1409]: E1002 19:37:56.115769 1409 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a618d742bb420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 37, 49, 825717280, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 37, 56, 112831013, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a618d742bb420" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:37:56.800058 kubelet[1409]: E1002 19:37:56.799998 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:57.800372 kubelet[1409]: E1002 19:37:57.800307 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:58.008900 kubelet[1409]: W1002 19:37:58.008850 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:37:58.008900 kubelet[1409]: E1002 19:37:58.008882 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:37:58.144762 kubelet[1409]: W1002 19:37:58.144635 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:37:58.144762 kubelet[1409]: E1002 19:37:58.144663 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:37:58.304657 kubelet[1409]: W1002 19:37:58.304613 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:37:58.304657 kubelet[1409]: E1002 19:37:58.304645 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:37:58.800529 kubelet[1409]: E1002 19:37:58.800470 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:59.328425 kubelet[1409]: W1002 19:37:59.328378 1409 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:37:59.328425 kubelet[1409]: E1002 19:37:59.328412 1409 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:37:59.789058 kubelet[1409]: I1002 19:37:59.788892 1409 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:37:59.801324 kubelet[1409]: E1002 19:37:59.801278 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:59.853722 kubelet[1409]: E1002 19:37:59.853666 1409 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:38:00.172913 kubelet[1409]: E1002 19:38:00.172779 1409 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.12" not found Oct 2 19:38:00.801743 kubelet[1409]: E1002 19:38:00.801698 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:01.218043 kubelet[1409]: E1002 19:38:01.217914 1409 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.12" not found Oct 2 19:38:01.802042 kubelet[1409]: E1002 19:38:01.801976 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:02.426230 kubelet[1409]: E1002 19:38:02.426181 1409 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.12\" not found" node="10.0.0.12" Oct 2 19:38:02.515378 kubelet[1409]: I1002 19:38:02.515340 1409 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:38:02.640496 kubelet[1409]: I1002 19:38:02.640454 1409 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.12" Oct 2 19:38:02.793085 kubelet[1409]: E1002 19:38:02.792989 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:02.802267 kubelet[1409]: E1002 19:38:02.802224 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:02.830530 sudo[1231]: pam_unix(sudo:session): session closed for user root Oct 2 19:38:02.829000 audit[1231]: USER_END pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:38:02.831197 kernel: kauditd_printk_skb: 456 callbacks suppressed Oct 2 19:38:02.831261 kernel: audit: type=1106 audit(1696275482.829:572): pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:38:02.829000 audit[1231]: CRED_DISP pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:38:02.834022 sshd[1227]: pam_unix(sshd:session): session closed for user core Oct 2 19:38:02.835919 kernel: audit: type=1104 audit(1696275482.829:573): pid=1231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:38:02.835971 kernel: audit: type=1106 audit(1696275482.833:574): pid=1227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:38:02.833000 audit[1227]: USER_END pid=1227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:38:02.836818 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:49340.service: Deactivated successfully. Oct 2 19:38:02.837416 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:38:02.837983 systemd-logind[1100]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:38:02.838812 systemd-logind[1100]: Removed session 7. Oct 2 19:38:02.833000 audit[1227]: CRED_DISP pid=1227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:38:02.841539 kernel: audit: type=1104 audit(1696275482.833:575): pid=1227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:38:02.841586 kernel: audit: type=1131 audit(1696275482.835:576): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:49340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:38:02.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:49340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:38:02.894189 kubelet[1409]: E1002 19:38:02.894120 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:02.994816 kubelet[1409]: E1002 19:38:02.994778 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.095749 kubelet[1409]: E1002 19:38:03.095625 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.196285 kubelet[1409]: E1002 19:38:03.196226 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.296798 kubelet[1409]: E1002 19:38:03.296736 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.397491 kubelet[1409]: E1002 19:38:03.397373 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.497973 kubelet[1409]: E1002 19:38:03.497916 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.598540 kubelet[1409]: E1002 19:38:03.598489 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.699137 kubelet[1409]: E1002 19:38:03.699008 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.799760 kubelet[1409]: E1002 19:38:03.799684 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:03.803027 kubelet[1409]: E1002 19:38:03.802978 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:03.900678 kubelet[1409]: E1002 19:38:03.900614 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.001260 kubelet[1409]: E1002 19:38:04.001119 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.101596 kubelet[1409]: E1002 19:38:04.101550 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.202042 kubelet[1409]: E1002 19:38:04.202005 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.302606 kubelet[1409]: E1002 19:38:04.302527 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.403057 kubelet[1409]: E1002 19:38:04.403025 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.503508 kubelet[1409]: E1002 19:38:04.503481 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.604186 kubelet[1409]: E1002 19:38:04.604076 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.705059 kubelet[1409]: E1002 19:38:04.705007 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.803832 kubelet[1409]: E1002 19:38:04.803778 1409 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:04.806033 kubelet[1409]: E1002 19:38:04.805994 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:04.906398 kubelet[1409]: E1002 19:38:04.906292 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.007042 kubelet[1409]: E1002 19:38:05.006994 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.107584 kubelet[1409]: E1002 19:38:05.107527 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.208250 kubelet[1409]: E1002 19:38:05.208116 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.308742 kubelet[1409]: E1002 19:38:05.308692 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.409352 kubelet[1409]: E1002 19:38:05.409305 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.509993 kubelet[1409]: E1002 19:38:05.509880 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.610475 kubelet[1409]: E1002 19:38:05.610423 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.711107 kubelet[1409]: E1002 19:38:05.711067 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.805002 kubelet[1409]: E1002 19:38:05.804895 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:05.812135 kubelet[1409]: E1002 19:38:05.812111 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:05.912804 kubelet[1409]: E1002 19:38:05.912761 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.013126 kubelet[1409]: E1002 19:38:06.013081 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.113953 kubelet[1409]: E1002 19:38:06.113884 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.214746 kubelet[1409]: E1002 19:38:06.214689 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.315603 kubelet[1409]: E1002 19:38:06.315549 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.416230 kubelet[1409]: E1002 19:38:06.416088 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.516815 kubelet[1409]: E1002 19:38:06.516752 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.617337 kubelet[1409]: E1002 19:38:06.617286 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 
19:38:06.717976 kubelet[1409]: E1002 19:38:06.717839 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.805620 kubelet[1409]: E1002 19:38:06.805555 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:06.818817 kubelet[1409]: E1002 19:38:06.818785 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:06.919620 kubelet[1409]: E1002 19:38:06.919556 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.020052 kubelet[1409]: E1002 19:38:07.019948 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.120465 kubelet[1409]: E1002 19:38:07.120410 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.220944 kubelet[1409]: E1002 19:38:07.220896 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.321929 kubelet[1409]: E1002 19:38:07.321698 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.422182 kubelet[1409]: E1002 19:38:07.422135 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.522685 kubelet[1409]: E1002 19:38:07.522643 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.623518 kubelet[1409]: E1002 19:38:07.623400 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.724087 kubelet[1409]: E1002 19:38:07.724025 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.805787 kubelet[1409]: E1002 19:38:07.805732 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:07.825055 kubelet[1409]: E1002 19:38:07.825006 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:07.925783 kubelet[1409]: E1002 19:38:07.925660 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.026325 kubelet[1409]: E1002 19:38:08.026237 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.126872 kubelet[1409]: E1002 19:38:08.126814 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.227494 kubelet[1409]: E1002 19:38:08.227368 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.328161 kubelet[1409]: E1002 19:38:08.328095 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.428892 kubelet[1409]: E1002 19:38:08.428818 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.529682 kubelet[1409]: E1002 19:38:08.529529 1409 kubelet_node_status.go:458] "Error 
getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.630286 kubelet[1409]: E1002 19:38:08.630220 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.730869 kubelet[1409]: E1002 19:38:08.730812 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.806444 kubelet[1409]: E1002 19:38:08.806337 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:08.831469 kubelet[1409]: E1002 19:38:08.831444 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:08.932099 kubelet[1409]: E1002 19:38:08.932052 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.032827 kubelet[1409]: E1002 19:38:09.032761 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.133522 kubelet[1409]: E1002 19:38:09.133391 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.234333 kubelet[1409]: E1002 19:38:09.234262 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.335188 kubelet[1409]: E1002 19:38:09.335082 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.435979 kubelet[1409]: E1002 19:38:09.435830 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.536419 kubelet[1409]: E1002 19:38:09.536354 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.636908 kubelet[1409]: E1002 19:38:09.636853 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.737452 kubelet[1409]: E1002 19:38:09.737315 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.796885 kubelet[1409]: E1002 19:38:09.796810 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:09.807327 kubelet[1409]: E1002 19:38:09.807279 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:09.837647 kubelet[1409]: E1002 19:38:09.837594 1409 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:38:09.854842 kubelet[1409]: E1002 19:38:09.854788 1409 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:38:09.938852 kubelet[1409]: I1002 19:38:09.938809 1409 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:38:09.939238 env[1112]: time="2023-10-02T19:38:09.939192635Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 19:38:09.939552 kubelet[1409]: I1002 19:38:09.939482 1409 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:38:10.807558 kubelet[1409]: I1002 19:38:10.807510 1409 apiserver.go:52] "Watching apiserver" Oct 2 19:38:10.807894 kubelet[1409]: E1002 19:38:10.807534 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:10.810148 kubelet[1409]: I1002 19:38:10.810122 1409 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:38:10.810214 kubelet[1409]: I1002 19:38:10.810188 1409 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:38:10.815031 systemd[1]: Created slice kubepods-besteffort-pod0fcc6f6c_a63d_4eef_8823_6b3798b2be74.slice. Oct 2 19:38:10.825778 systemd[1]: Created slice kubepods-burstable-pode63a566e_4cf3_47b6_b4e3_c31a6f6fcd6d.slice. Oct 2 19:38:10.901044 kubelet[1409]: I1002 19:38:10.900990 1409 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:38:10.917832 kubelet[1409]: I1002 19:38:10.917795 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-cgroup\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.917832 kubelet[1409]: I1002 19:38:10.917827 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cni-path\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918031 kubelet[1409]: I1002 19:38:10.917847 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-config-path\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918031 kubelet[1409]: I1002 19:38:10.917865 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-host-proc-sys-net\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918031 kubelet[1409]: I1002 19:38:10.917911 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fcc6f6c-a63d-4eef-8823-6b3798b2be74-xtables-lock\") pod \"kube-proxy-zz9pp\" (UID: \"0fcc6f6c-a63d-4eef-8823-6b3798b2be74\") " pod="kube-system/kube-proxy-zz9pp" Oct 2 19:38:10.918031 kubelet[1409]: I1002 19:38:10.917974 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fcc6f6c-a63d-4eef-8823-6b3798b2be74-lib-modules\") pod \"kube-proxy-zz9pp\" (UID: \"0fcc6f6c-a63d-4eef-8823-6b3798b2be74\") " pod="kube-system/kube-proxy-zz9pp" Oct 2 19:38:10.918031 kubelet[1409]: I1002 19:38:10.918006 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jgpp\" (UniqueName: 
\"kubernetes.io/projected/0fcc6f6c-a63d-4eef-8823-6b3798b2be74-kube-api-access-7jgpp\") pod \"kube-proxy-zz9pp\" (UID: \"0fcc6f6c-a63d-4eef-8823-6b3798b2be74\") " pod="kube-system/kube-proxy-zz9pp" Oct 2 19:38:10.918159 kubelet[1409]: I1002 19:38:10.918128 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-run\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918354 kubelet[1409]: I1002 19:38:10.918221 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-hostproc\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918354 kubelet[1409]: I1002 19:38:10.918255 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-host-proc-sys-kernel\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918354 kubelet[1409]: I1002 19:38:10.918321 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf6rr\" (UniqueName: \"kubernetes.io/projected/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-kube-api-access-sf6rr\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918447 kubelet[1409]: I1002 19:38:10.918377 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fcc6f6c-a63d-4eef-8823-6b3798b2be74-kube-proxy\") pod \"kube-proxy-zz9pp\" (UID: \"0fcc6f6c-a63d-4eef-8823-6b3798b2be74\") " pod="kube-system/kube-proxy-zz9pp" Oct 2 19:38:10.918447 kubelet[1409]: I1002 19:38:10.918410 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-etc-cni-netd\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918447 kubelet[1409]: I1002 19:38:10.918436 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-lib-modules\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918521 kubelet[1409]: I1002 19:38:10.918466 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-xtables-lock\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918553 kubelet[1409]: I1002 19:38:10.918524 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-clustermesh-secrets\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " 
pod="kube-system/cilium-ml547" Oct 2 19:38:10.918553 kubelet[1409]: I1002 19:38:10.918550 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-bpf-maps\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918596 kubelet[1409]: I1002 19:38:10.918572 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-hubble-tls\") pod \"cilium-ml547\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " pod="kube-system/cilium-ml547" Oct 2 19:38:10.918596 kubelet[1409]: I1002 19:38:10.918591 1409 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:38:11.124966 kubelet[1409]: E1002 19:38:11.124239 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:11.125151 env[1112]: time="2023-10-02T19:38:11.125100771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zz9pp,Uid:0fcc6f6c-a63d-4eef-8823-6b3798b2be74,Namespace:kube-system,Attempt:0,}" Oct 2 19:38:11.436486 kubelet[1409]: E1002 19:38:11.436339 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:11.437045 env[1112]: time="2023-10-02T19:38:11.436979815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ml547,Uid:e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d,Namespace:kube-system,Attempt:0,}" Oct 2 19:38:11.731897 env[1112]: time="2023-10-02T19:38:11.731786059Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:11.734176 env[1112]: time="2023-10-02T19:38:11.734132312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:11.735881 env[1112]: time="2023-10-02T19:38:11.735853859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:11.737430 env[1112]: time="2023-10-02T19:38:11.737407145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:11.738959 env[1112]: time="2023-10-02T19:38:11.738930815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:11.740191 env[1112]: time="2023-10-02T19:38:11.740157911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:11.741606 env[1112]: time="2023-10-02T19:38:11.741565574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 2 19:38:11.744194 env[1112]: time="2023-10-02T19:38:11.744159423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:11.764121 env[1112]: time="2023-10-02T19:38:11.764035694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:38:11.764298 env[1112]: time="2023-10-02T19:38:11.764138276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:38:11.764298 env[1112]: time="2023-10-02T19:38:11.764198108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:38:11.764568 env[1112]: time="2023-10-02T19:38:11.764406897Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b pid=1509 runtime=io.containerd.runc.v2 Oct 2 19:38:11.766061 env[1112]: time="2023-10-02T19:38:11.765993352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:38:11.766061 env[1112]: time="2023-10-02T19:38:11.766027754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:38:11.766061 env[1112]: time="2023-10-02T19:38:11.766041790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:38:11.766355 env[1112]: time="2023-10-02T19:38:11.766282338Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2e150724bcbc3d6444db1242b64205fb4da788e2583f9eb682a19f280b7956 pid=1506 runtime=io.containerd.runc.v2 Oct 2 19:38:11.780285 systemd[1]: Started cri-containerd-2d2e150724bcbc3d6444db1242b64205fb4da788e2583f9eb682a19f280b7956.scope. Oct 2 19:38:11.783093 systemd[1]: Started cri-containerd-7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b.scope. 
Oct 2 19:38:11.796210 kernel: audit: type=1400 audit(1696275491.790:577): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.796330 kernel: audit: type=1400 audit(1696275491.790:578): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.796372 kernel: audit: type=1400 audit(1696275491.790:579): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.800386 kernel: audit: type=1400 audit(1696275491.790:580): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.800433 kernel: audit: type=1400 audit(1696275491.790:581): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.807955 kernel: audit: type=1400 audit(1696275491.790:582): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.807997 kernel: audit: type=1400 audit(1696275491.790:583): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.808038 kubelet[1409]: E1002 19:38:11.808010 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:38:11.810702 kernel: audit: type=1400 audit(1696275491.790:584): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.813380 kernel: audit: type=1400 audit(1696275491.790:585): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.813425 kernel: audit: type=1400 audit(1696275491.794:586): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.794000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.794000 audit: BPF prog-id=67 op=LOAD Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=1506 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326531353037323462636263336436343434646231323432623634 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=1506 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326531353037323462636263336436343434646231323432623634 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:38:11.795000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.795000 audit: BPF prog-id=68 op=LOAD Oct 2 19:38:11.795000 audit[1529]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c000385290 items=0 ppid=1506 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326531353037323462636263336436343434646231323432623634 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.802000 audit: BPF prog-id=69 op=LOAD Oct 2 19:38:11.802000 audit[1529]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c0003852d8 items=0 ppid=1506 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326531353037323462636263336436343434646231323432623634 Oct 2 19:38:11.804000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:38:11.804000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { perfmon } for pid=1529 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.809000 audit: BPF prog-id=70 op=LOAD Oct 2 19:38:11.804000 audit[1529]: AVC avc: denied { bpf } for pid=1529 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.804000 audit: BPF prog-id=71 op=LOAD Oct 2 19:38:11.804000 audit[1529]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c0003856e8 items=0 ppid=1506 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264326531353037323462636263336436343434646231323432623634 Oct 2 19:38:11.809000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.809000 audit[1525]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1509 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.809000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336530646435363032653134333532633633666335343330373837 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1509 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336530646435363032653134333532633633666335343330373837 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit: BPF prog-id=72 op=LOAD Oct 2 19:38:11.814000 audit[1525]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00033c6f0 items=0 ppid=1509 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336530646435363032653134333532633633666335343330373837 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit: BPF prog-id=73 op=LOAD Oct 2 19:38:11.814000 audit[1525]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00033c738 items=0 ppid=1509 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336530646435363032653134333532633633666335343330373837 Oct 2 19:38:11.814000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:38:11.814000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: 
AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { perfmon } for pid=1525 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit[1525]: AVC avc: denied { bpf } for pid=1525 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:11.814000 audit: BPF prog-id=74 op=LOAD Oct 2 19:38:11.814000 audit[1525]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00033cb48 items=0 ppid=1509 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:11.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766336530646435363032653134333532633633666335343330373837 Oct 2 19:38:11.832653 env[1112]: time="2023-10-02T19:38:11.832591781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zz9pp,Uid:0fcc6f6c-a63d-4eef-8823-6b3798b2be74,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d2e150724bcbc3d6444db1242b64205fb4da788e2583f9eb682a19f280b7956\"" Oct 2 19:38:11.836401 kubelet[1409]: E1002 19:38:11.836329 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:11.837877 env[1112]: time="2023-10-02T19:38:11.837813841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\"" Oct 2 19:38:11.879383 env[1112]: time="2023-10-02T19:38:11.879331480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ml547,Uid:e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\"" Oct 2 19:38:11.880071 kubelet[1409]: E1002 19:38:11.880055 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:12.024967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764595336.mount: Deactivated successfully. Oct 2 19:38:12.808561 kubelet[1409]: E1002 19:38:12.808467 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:13.240214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586609750.mount: Deactivated successfully. Oct 2 19:38:13.809019 kubelet[1409]: E1002 19:38:13.808983 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:13.914955 env[1112]: time="2023-10-02T19:38:13.914899846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:13.916455 env[1112]: time="2023-10-02T19:38:13.916401420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95433ef6ee1d55f93a09fe73299b8b95f623d791acd4da21a86bb749626df9ad,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:13.917758 env[1112]: time="2023-10-02T19:38:13.917701681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:13.918912 env[1112]: time="2023-10-02T19:38:13.918884274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d8c8e3e8fe630c3f2d84a22722d4891343196483ac4cc02c1ba9345b1bfc8a3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:13.919440 env[1112]: time="2023-10-02T19:38:13.919402715Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\" returns image reference \"sha256:95433ef6ee1d55f93a09fe73299b8b95f623d791acd4da21a86bb749626df9ad\"" Oct 2 19:38:13.920363 env[1112]: time="2023-10-02T19:38:13.920335940Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:38:13.921256 env[1112]: time="2023-10-02T19:38:13.921228279Z" level=info msg="CreateContainer within sandbox \"2d2e150724bcbc3d6444db1242b64205fb4da788e2583f9eb682a19f280b7956\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:38:13.933861 env[1112]: time="2023-10-02T19:38:13.933807177Z" level=info msg="CreateContainer within sandbox \"2d2e150724bcbc3d6444db1242b64205fb4da788e2583f9eb682a19f280b7956\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"be34d4aec5d52efbb45d175f8bff13f91b2f31d78c39c1da9916fdf9902399a7\"" Oct 2 19:38:13.934636 env[1112]: time="2023-10-02T19:38:13.934525840Z" level=info msg="StartContainer for \"be34d4aec5d52efbb45d175f8bff13f91b2f31d78c39c1da9916fdf9902399a7\"" Oct 2 19:38:13.955808 systemd[1]: Started cri-containerd-be34d4aec5d52efbb45d175f8bff13f91b2f31d78c39c1da9916fdf9902399a7.scope. 
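A note on reading the runc audit records above: tclass=capability2 holds the capabilities numbered above 31, so capability=38 is CAP_PERFMON and capability=39 is CAP_BPF. The adjacent SYSCALL records (syscall=321, i.e. bpf()) still report success=yes, so these AVC denials do not stop the container setup here. The proctitle= field is the process command line, hex-encoded with NUL separators between arguments; a minimal decoding sketch in Python (SAMPLE is a truncated prefix of the runc proctitle logged above):

# Decode an audit PROCTITLE value: the process argv, hex-encoded, with
# NUL bytes separating the individual arguments.
# SAMPLE is a truncated prefix of the runc proctitle recorded above.
SAMPLE = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
    "2F6B38732E696F002D2D6C6F67"
)

def decode_proctitle(hex_value: str) -> list:
    """Return the argv encoded in an audit proctitle= field."""
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

print(decode_proctitle(SAMPLE))
# -> ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']

The same decoding applies to the NETFILTER_CFG proctitles further down; the first of them (pid=1626) decodes to "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle", i.e. kube-proxy registering its canary chain through iptables-nft.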
Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1506 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333464346165633564353265666262343564313735663862666631 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.006000 audit: BPF prog-id=75 op=LOAD Oct 2 19:38:14.006000 audit[1578]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0000efc70 items=0 ppid=1506 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.006000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333464346165633564353265666262343564313735663862666631 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit: BPF prog-id=76 op=LOAD Oct 2 19:38:14.007000 audit[1578]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0000efcb8 items=0 ppid=1506 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333464346165633564353265666262343564313735663862666631 Oct 2 19:38:14.007000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:38:14.007000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:38:14.007000 audit: BPF prog-id=77 op=LOAD Oct 2 19:38:14.007000 audit[1578]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0000efd48 items=0 ppid=1506 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265333464346165633564353265666262343564313735663862666631 Oct 2 19:38:14.025376 env[1112]: time="2023-10-02T19:38:14.024923205Z" level=info msg="StartContainer for \"be34d4aec5d52efbb45d175f8bff13f91b2f31d78c39c1da9916fdf9902399a7\" returns successfully" Oct 2 19:38:14.067000 audit[1626]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.067000 audit[1626]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6a0acb00 a2=0 a3=7ffc6a0acaec items=0 ppid=1587 pid=1626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:38:14.068000 audit[1627]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.068000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8d30ede0 a2=0 a3=7fff8d30edcc items=0 ppid=1587 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:38:14.070000 audit[1629]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.070000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc10160280 a2=0 a3=7ffc1016026c items=0 ppid=1587 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:38:14.071000 audit[1630]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=1630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.071000 audit[1630]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1def3510 a2=0 a3=7ffd1def34fc items=0 ppid=1587 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.071000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:38:14.072000 audit[1631]: NETFILTER_CFG table=filter:39 family=10 entries=1 op=nft_register_chain pid=1631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.072000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9faaa570 a2=0 a3=7ffc9faaa55c items=0 ppid=1587 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:38:14.072000 audit[1632]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=1632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.072000 audit[1632]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2ff69f30 a2=0 a3=7ffe2ff69f1c items=0 ppid=1587 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:38:14.174000 audit[1633]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.174000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc01a978d0 a2=0 a3=7ffc01a978bc items=0 ppid=1587 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.174000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:38:14.176000 audit[1635]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.176000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc326dc640 a2=0 a3=7ffc326dc62c items=0 ppid=1587 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:38:14.180000 audit[1638]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.180000 audit[1638]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd94a5ca30 a2=0 a3=7ffd94a5ca1c items=0 ppid=1587 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.180000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:38:14.181000 audit[1639]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.181000 audit[1639]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd022613d0 a2=0 a3=7ffd022613bc items=0 ppid=1587 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:38:14.183000 audit[1641]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.183000 audit[1641]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff74e0a4e0 a2=0 a3=7fff74e0a4cc items=0 ppid=1587 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:38:14.184000 audit[1642]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.184000 audit[1642]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9f074cb0 a2=0 a3=7ffc9f074c9c 
items=0 ppid=1587 pid=1642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.184000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:38:14.187000 audit[1644]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.187000 audit[1644]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc9137e590 a2=0 a3=7ffc9137e57c items=0 ppid=1587 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:38:14.191000 audit[1647]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.191000 audit[1647]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeb9199370 a2=0 a3=7ffeb919935c items=0 ppid=1587 pid=1647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:38:14.192000 audit[1648]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.192000 audit[1648]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6e85ba10 a2=0 a3=7ffd6e85b9fc items=0 ppid=1587 pid=1648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:38:14.194000 audit[1650]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.194000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7459ad90 a2=0 a3=7ffe7459ad7c items=0 ppid=1587 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:38:14.195000 audit[1651]: NETFILTER_CFG table=filter:51 family=2 entries=1 
op=nft_register_chain pid=1651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.195000 audit[1651]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdda75d8b0 a2=0 a3=7ffdda75d89c items=0 ppid=1587 pid=1651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:38:14.198000 audit[1653]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.198000 audit[1653]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff74ffbf00 a2=0 a3=7fff74ffbeec items=0 ppid=1587 pid=1653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:38:14.202000 audit[1656]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.202000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbc89da30 a2=0 a3=7ffcbc89da1c items=0 ppid=1587 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:38:14.205000 audit[1659]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.205000 audit[1659]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb7587280 a2=0 a3=7ffcb758726c items=0 ppid=1587 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:38:14.206000 audit[1660]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.206000 audit[1660]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe5bff7f20 a2=0 a3=7ffe5bff7f0c items=0 ppid=1587 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:38:14.208000 audit[1662]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.208000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffef5ae4710 a2=0 a3=7ffef5ae46fc items=0 ppid=1587 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:38:14.211000 audit[1665]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:38:14.211000 audit[1665]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc6bffd3a0 a2=0 a3=7ffc6bffd38c items=0 ppid=1587 pid=1665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:38:14.221000 audit[1669]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:38:14.221000 audit[1669]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffdd90edb50 a2=0 a3=7ffdd90edb3c items=0 ppid=1587 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:38:14.228000 audit[1669]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:38:14.228000 audit[1669]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffdd90edb50 a2=0 a3=7ffdd90edb3c items=0 ppid=1587 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:38:14.234000 audit[1675]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1675 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.234000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc55343d10 a2=0 a3=7ffc55343cfc items=0 ppid=1587 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:38:14.236000 audit[1677]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.236000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffed09b44f0 a2=0 a3=7ffed09b44dc items=0 ppid=1587 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:38:14.240000 audit[1680]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1680 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.240000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe21712710 a2=0 a3=7ffe217126fc items=0 ppid=1587 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:38:14.241000 audit[1681]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1681 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.241000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff03795570 a2=0 a3=7fff0379555c items=0 ppid=1587 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:38:14.243000 audit[1683]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1683 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.243000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea8ba6970 a2=0 a3=7ffea8ba695c items=0 ppid=1587 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.243000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:38:14.244000 audit[1684]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1684 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.244000 audit[1684]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd60891600 a2=0 a3=7ffd608915ec items=0 ppid=1587 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:38:14.246000 audit[1686]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.246000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcc27aaf20 a2=0 a3=7ffcc27aaf0c items=0 ppid=1587 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:38:14.249000 audit[1689]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.249000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc06da78d0 a2=0 a3=7ffc06da78bc items=0 ppid=1587 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.249000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:38:14.250000 audit[1690]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1690 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.250000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc41784110 a2=0 a3=7ffc417840fc items=0 ppid=1587 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.250000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:38:14.252000 audit[1692]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1692 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.252000 audit[1692]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3589b660 a2=0 a3=7fff3589b64c items=0 ppid=1587 pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.252000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:38:14.253000 audit[1693]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1693 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.253000 audit[1693]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebcda7910 a2=0 a3=7ffebcda78fc items=0 ppid=1587 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:38:14.256000 audit[1695]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.256000 audit[1695]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4fdf5b30 a2=0 a3=7fff4fdf5b1c items=0 ppid=1587 pid=1695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:38:14.259000 audit[1698]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.259000 audit[1698]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe5e040250 a2=0 a3=7ffe5e04023c items=0 ppid=1587 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:38:14.262000 audit[1701]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1701 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.262000 audit[1701]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6fd248a0 a2=0 a3=7ffd6fd2488c items=0 ppid=1587 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.262000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:38:14.263000 audit[1702]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1702 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:38:14.263000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea6571f40 a2=0 a3=7ffea6571f2c items=0 ppid=1587 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:38:14.265000 audit[1704]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1704 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.265000 audit[1704]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffeb7891d70 a2=0 a3=7ffeb7891d5c items=0 ppid=1587 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:38:14.268000 audit[1707]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:38:14.268000 audit[1707]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffde5c74280 a2=0 a3=7ffde5c7426c items=0 ppid=1587 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:38:14.274000 audit[1711]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:38:14.274000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdea3f45b0 a2=0 a3=7ffdea3f459c items=0 ppid=1587 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.274000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:38:14.274000 audit[1711]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:38:14.274000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffdea3f45b0 a2=0 a3=7ffdea3f459c items=0 ppid=1587 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:38:14.274000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:38:14.809400 kubelet[1409]: E1002 19:38:14.809333 1409 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:14.989268 kubelet[1409]: E1002 19:38:14.989225 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:14.995848 kubelet[1409]: I1002 19:38:14.995809 1409 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zz9pp" podStartSLOduration=-9.223372023859015e+09 pod.CreationTimestamp="2023-10-02 19:38:02 +0000 UTC" firstStartedPulling="2023-10-02 19:38:11.837343599 +0000 UTC m=+22.537417629" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:38:14.995647405 +0000 UTC m=+25.695721435" watchObservedRunningTime="2023-10-02 19:38:14.995761007 +0000 UTC m=+25.695835037" Oct 2 19:38:15.810425 kubelet[1409]: E1002 19:38:15.810366 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:15.990947 kubelet[1409]: E1002 19:38:15.990924 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:16.810664 kubelet[1409]: E1002 19:38:16.810610 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:17.811716 kubelet[1409]: E1002 19:38:17.811652 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:18.812484 kubelet[1409]: E1002 19:38:18.812432 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:19.812631 kubelet[1409]: E1002 19:38:19.812586 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:20.813643 kubelet[1409]: E1002 19:38:20.813576 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:20.943785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984316355.mount: Deactivated successfully. 
Oct 2 19:38:21.815392 kubelet[1409]: E1002 19:38:21.815315 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:22.816189 kubelet[1409]: E1002 19:38:22.816134 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:23.816642 kubelet[1409]: E1002 19:38:23.816595 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:24.817362 kubelet[1409]: E1002 19:38:24.817305 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:25.817898 kubelet[1409]: E1002 19:38:25.817843 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:26.426183 env[1112]: time="2023-10-02T19:38:26.426107762Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:26.428031 env[1112]: time="2023-10-02T19:38:26.427977725Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:26.430092 env[1112]: time="2023-10-02T19:38:26.430046462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:38:26.430859 env[1112]: time="2023-10-02T19:38:26.430829071Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 19:38:26.433023 env[1112]: time="2023-10-02T19:38:26.432977198Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:38:26.450279 env[1112]: time="2023-10-02T19:38:26.450231493Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\"" Oct 2 19:38:26.450726 env[1112]: time="2023-10-02T19:38:26.450697112Z" level=info msg="StartContainer for \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\"" Oct 2 19:38:26.485781 systemd[1]: Started cri-containerd-967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03.scope. Oct 2 19:38:26.504512 systemd[1]: cri-containerd-967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03.scope: Deactivated successfully. Oct 2 19:38:26.509367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03-rootfs.mount: Deactivated successfully. 
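The containerd entries in this log (the env[1112] lines carrying time=, level= and msg= fields) are logfmt key/value pairs, which makes it easy to pull out every event for a single container ID when tracing a failure like the one that follows. A minimal parsing sketch, using a plain regex rather than any containerd tooling:

import re

# Parse key=value / key="quoted value" pairs as used by the containerd
# (env[...]) entries in this log; quoted values may contain \" escapes.
PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    out = {}
    for key, value in PAIR.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        out[key] = value
    return out

# Sample taken verbatim from the StartContainer entry above.
sample = ('time="2023-10-02T19:38:26.450697112Z" level=info '
          'msg="StartContainer for \\"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\\""')
print(parse_logfmt(sample)["msg"])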
Oct 2 19:38:26.780209 env[1112]: time="2023-10-02T19:38:26.780054796Z" level=info msg="shim disconnected" id=967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03 Oct 2 19:38:26.780209 env[1112]: time="2023-10-02T19:38:26.780108057Z" level=warning msg="cleaning up after shim disconnected" id=967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03 namespace=k8s.io Oct 2 19:38:26.780209 env[1112]: time="2023-10-02T19:38:26.780117003Z" level=info msg="cleaning up dead shim" Oct 2 19:38:26.793112 env[1112]: time="2023-10-02T19:38:26.793034105Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:38:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1744 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:38:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:38:26.793501 env[1112]: time="2023-10-02T19:38:26.793375700Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:38:26.793727 env[1112]: time="2023-10-02T19:38:26.793642855Z" level=error msg="Failed to pipe stderr of container \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\"" error="reading from a closed fifo" Oct 2 19:38:26.797279 env[1112]: time="2023-10-02T19:38:26.797235261Z" level=error msg="Failed to pipe stdout of container \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\"" error="reading from a closed fifo" Oct 2 19:38:26.800504 env[1112]: time="2023-10-02T19:38:26.800420978Z" level=error msg="StartContainer for \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:38:26.800735 kubelet[1409]: E1002 19:38:26.800706 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03" Oct 2 19:38:26.800845 kubelet[1409]: E1002 19:38:26.800833 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:38:26.800845 kubelet[1409]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:38:26.800845 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:38:26.800845 kubelet[1409]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sf6rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:38:26.801044 kubelet[1409]: E1002 19:38:26.800873 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:38:26.818427 kubelet[1409]: E1002 19:38:26.818379 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:27.006428 kubelet[1409]: E1002 19:38:27.006378 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:27.008016 env[1112]: time="2023-10-02T19:38:27.007972290Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:38:27.022261 env[1112]: time="2023-10-02T19:38:27.022199165Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\"" Oct 2 19:38:27.022675 env[1112]: time="2023-10-02T19:38:27.022654576Z" level=info msg="StartContainer for \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\"" Oct 2 19:38:27.041742 systemd[1]: Started cri-containerd-cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58.scope. 
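Both attempts at this cilium init container (967a78... above and the retry cdd3c6... below) fail at the same point. The mount-cgroup container's job, per the spec dumped by the kubelet, is to copy cilium-mount onto the host through the /hostbin mount (BIN_PATH=/opt/cni/bin), nsenter PID 1's cgroup and mount namespaces, mount the cgroup2 hierarchy at CGROUP_ROOT=/run/cilium/cgroupv2, and remove the helper again. It never gets that far: because the spec requests SELinuxOptions Type:spc_t, runc tries to label the new container's session keyring by writing the requested context to /proc/self/attr/keycreate during container init, and that write is rejected with "invalid argument", which typically means the running kernel/policy does not accept the requested context for keyrings. A small diagnostic probe of the same write (assumptions: run as root on the affected node; only Type:spc_t and Level:s0 come from the spec, the user and role parts below are assumed defaults):

# Diagnostic sketch, not a fix: attempt the write that fails in the log
# above and report the errno. Run on the affected node as root.
import errno

# Only Type:spc_t and Level:s0 come from the container spec; the user and
# role components below are assumed defaults.
CONTEXT = "system_u:system_r:spc_t:s0"

def probe_keycreate(context: str) -> str:
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(context)
        return "keyring labelling with %r accepted" % context
    except OSError as exc:
        if exc.errno == errno.EINVAL:
            return "EINVAL: kernel/policy rejected %r (matches the failure above)" % context
        return "failed: %s" % exc

if __name__ == "__main__":
    print(probe_keycreate(CONTEXT))

If the probe reproduces the EINVAL, the problem lies in the node's SELinux policy/kernel support for keyring labelling rather than in the cilium image itself, which is consistent with the retry below hitting the identical error.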
Oct 2 19:38:27.051583 systemd[1]: cri-containerd-cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58.scope: Deactivated successfully. Oct 2 19:38:27.051900 systemd[1]: Stopped cri-containerd-cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58.scope. Oct 2 19:38:27.061318 env[1112]: time="2023-10-02T19:38:27.061242614Z" level=info msg="shim disconnected" id=cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58 Oct 2 19:38:27.061318 env[1112]: time="2023-10-02T19:38:27.061311133Z" level=warning msg="cleaning up after shim disconnected" id=cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58 namespace=k8s.io Oct 2 19:38:27.061318 env[1112]: time="2023-10-02T19:38:27.061321563Z" level=info msg="cleaning up dead shim" Oct 2 19:38:27.072678 env[1112]: time="2023-10-02T19:38:27.072611522Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:38:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1780 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:38:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:38:27.072959 env[1112]: time="2023-10-02T19:38:27.072888415Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:38:27.073192 env[1112]: time="2023-10-02T19:38:27.073102830Z" level=error msg="Failed to pipe stdout of container \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\"" error="reading from a closed fifo" Oct 2 19:38:27.073260 env[1112]: time="2023-10-02T19:38:27.073223368Z" level=error msg="Failed to pipe stderr of container \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\"" error="reading from a closed fifo" Oct 2 19:38:27.075521 env[1112]: time="2023-10-02T19:38:27.075472074Z" level=error msg="StartContainer for \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:38:27.075767 kubelet[1409]: E1002 19:38:27.075739 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58" Oct 2 19:38:27.075902 kubelet[1409]: E1002 19:38:27.075879 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:38:27.075902 kubelet[1409]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:38:27.075902 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:38:27.075902 kubelet[1409]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sf6rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:38:27.076143 kubelet[1409]: E1002 19:38:27.075926 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:38:27.818754 kubelet[1409]: E1002 19:38:27.818708 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:28.008682 kubelet[1409]: I1002 19:38:28.008641 1409 scope.go:115] "RemoveContainer" containerID="967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03" Oct 2 19:38:28.008862 kubelet[1409]: I1002 19:38:28.008843 1409 scope.go:115] "RemoveContainer" containerID="967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03" Oct 2 19:38:28.009902 env[1112]: time="2023-10-02T19:38:28.009863053Z" level=info msg="RemoveContainer for \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\"" Oct 2 19:38:28.010328 env[1112]: time="2023-10-02T19:38:28.010303244Z" level=info msg="RemoveContainer for \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\"" Oct 2 19:38:28.010440 env[1112]: time="2023-10-02T19:38:28.010407420Z" level=error msg="RemoveContainer for \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\" failed" error="failed to set removing state for container \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\": container is already in removing state" Oct 2 19:38:28.010675 kubelet[1409]: E1002 19:38:28.010656 1409 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = 
failed to set removing state for container \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\": container is already in removing state" containerID="967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03" Oct 2 19:38:28.010733 kubelet[1409]: E1002 19:38:28.010710 1409 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03": container is already in removing state; Skipping pod "cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)" Oct 2 19:38:28.010827 kubelet[1409]: E1002 19:38:28.010779 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:28.011054 kubelet[1409]: E1002 19:38:28.011039 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:38:28.012833 env[1112]: time="2023-10-02T19:38:28.012806780Z" level=info msg="RemoveContainer for \"967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03\" returns successfully" Oct 2 19:38:28.819401 kubelet[1409]: E1002 19:38:28.819364 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:29.010968 kubelet[1409]: E1002 19:38:29.010926 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:29.011183 kubelet[1409]: E1002 19:38:29.011159 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:38:29.796281 kubelet[1409]: E1002 19:38:29.796232 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:29.819646 kubelet[1409]: E1002 19:38:29.819619 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:29.885233 kubelet[1409]: W1002 19:38:29.885197 1409 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode63a566e_4cf3_47b6_b4e3_c31a6f6fcd6d.slice/cri-containerd-967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03.scope WatchSource:0}: container "967a78c6585c28f3aaa0b06bfd934d8036531f14176f2ea49eb8349b2a315c03" in namespace "k8s.io": not found Oct 2 19:38:30.166729 update_engine[1103]: I1002 19:38:30.166662 1103 update_attempter.cc:505] Updating boot flags... 
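Each start attempt recorded above dies at the same step: during container init, runc writes an SELinux key-creation context to /proc/self/attr/keycreate, the kernel rejects it with "invalid argument", containerd reports the shim task as failed, and kubelet turns that into RunContainerError and back-off. The Go sketch below is a diagnostic aid, not part of the logged system: it repeats only that single write so the rejection can be reproduced outside kubelet and containerd. The full context string is an assumption assembled from the Type and Level fields (spc_t, s0) in the logged container spec.

// keycreate_probe.go: diagnostic sketch, run as root on the affected node.
// It performs the one write that the log shows failing
// ("write /proc/self/attr/keycreate: invalid argument").
package main

import (
	"fmt"
	"os"
)

func main() {
	const attr = "/proc/self/attr/keycreate"
	// Assumed full context; only spc_t and s0 come from the logged spec.
	label := "system_u:system_r:spc_t:s0"

	if err := os.WriteFile(attr, []byte(label), 0o644); err != nil {
		// On this node the write fails with EINVAL, which runc surfaces as
		// "error during container init" and kubelet as RunContainerError.
		fmt.Fprintf(os.Stderr, "write %s: %v\n", attr, err)
		os.Exit(1)
	}
	fmt.Printf("kernel accepted key creation context %q\n", label)
}

If the probe succeeds when run directly, the rejection is specific to the context runc derives for this pod; if it fails with the same error, the node's kernel or loaded policy does not accept setting a key-creation context at all.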
Oct 2 19:38:30.820223 kubelet[1409]: E1002 19:38:30.820149 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:31.820927 kubelet[1409]: E1002 19:38:31.820893 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:32.821438 kubelet[1409]: E1002 19:38:32.821382 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:32.990930 kubelet[1409]: W1002 19:38:32.990884 1409 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode63a566e_4cf3_47b6_b4e3_c31a6f6fcd6d.slice/cri-containerd-cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58.scope WatchSource:0}: task cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58 not found: not found Oct 2 19:38:33.822067 kubelet[1409]: E1002 19:38:33.821986 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:34.822568 kubelet[1409]: E1002 19:38:34.822499 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:35.823477 kubelet[1409]: E1002 19:38:35.823418 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:36.824152 kubelet[1409]: E1002 19:38:36.824106 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:37.824946 kubelet[1409]: E1002 19:38:37.824897 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:38.825235 kubelet[1409]: E1002 19:38:38.825186 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:39.825804 kubelet[1409]: E1002 19:38:39.825747 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:40.826227 kubelet[1409]: E1002 19:38:40.826160 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:41.826642 kubelet[1409]: E1002 19:38:41.826566 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:42.827307 kubelet[1409]: E1002 19:38:42.827262 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:43.827926 kubelet[1409]: E1002 19:38:43.827858 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:43.950278 kubelet[1409]: E1002 19:38:43.950232 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:43.952249 env[1112]: time="2023-10-02T19:38:43.952194078Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:38:44.316824 env[1112]: time="2023-10-02T19:38:44.316699648Z" level=info msg="CreateContainer within sandbox 
\"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\"" Oct 2 19:38:44.317388 env[1112]: time="2023-10-02T19:38:44.317345384Z" level=info msg="StartContainer for \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\"" Oct 2 19:38:44.342111 systemd[1]: Started cri-containerd-3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b.scope. Oct 2 19:38:44.356504 systemd[1]: cri-containerd-3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b.scope: Deactivated successfully. Oct 2 19:38:44.364438 env[1112]: time="2023-10-02T19:38:44.364395381Z" level=info msg="shim disconnected" id=3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b Oct 2 19:38:44.364550 env[1112]: time="2023-10-02T19:38:44.364444203Z" level=warning msg="cleaning up after shim disconnected" id=3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b namespace=k8s.io Oct 2 19:38:44.364550 env[1112]: time="2023-10-02T19:38:44.364453390Z" level=info msg="cleaning up dead shim" Oct 2 19:38:44.385391 env[1112]: time="2023-10-02T19:38:44.385353996Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:38:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1834 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:38:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:38:44.385615 env[1112]: time="2023-10-02T19:38:44.385563010Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:38:44.385829 env[1112]: time="2023-10-02T19:38:44.385782232Z" level=error msg="Failed to pipe stderr of container \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\"" error="reading from a closed fifo" Oct 2 19:38:44.385829 env[1112]: time="2023-10-02T19:38:44.385785157Z" level=error msg="Failed to pipe stdout of container \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\"" error="reading from a closed fifo" Oct 2 19:38:44.387400 env[1112]: time="2023-10-02T19:38:44.387349891Z" level=error msg="StartContainer for \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:38:44.387657 kubelet[1409]: E1002 19:38:44.387563 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b" Oct 2 19:38:44.387733 kubelet[1409]: E1002 19:38:44.387696 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:38:44.387733 kubelet[1409]: nsenter 
--cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:38:44.387733 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:38:44.387733 kubelet[1409]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sf6rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:38:44.387866 kubelet[1409]: E1002 19:38:44.387742 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:38:44.828158 kubelet[1409]: E1002 19:38:44.828113 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:45.032607 kubelet[1409]: I1002 19:38:45.032574 1409 scope.go:115] "RemoveContainer" containerID="cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58" Oct 2 19:38:45.033050 kubelet[1409]: I1002 19:38:45.032919 1409 scope.go:115] "RemoveContainer" containerID="cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58" Oct 2 19:38:45.033461 env[1112]: time="2023-10-02T19:38:45.033422101Z" level=info msg="RemoveContainer for \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\"" Oct 2 19:38:45.033794 env[1112]: time="2023-10-02T19:38:45.033750950Z" level=info msg="RemoveContainer for \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\"" Oct 2 19:38:45.033855 env[1112]: time="2023-10-02T19:38:45.033827434Z" level=error msg="RemoveContainer for \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\" failed" error="failed to set removing state for container \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\": container is 
already in removing state" Oct 2 19:38:45.033973 kubelet[1409]: E1002 19:38:45.033954 1409 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\": container is already in removing state" containerID="cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58" Oct 2 19:38:45.034049 kubelet[1409]: E1002 19:38:45.033985 1409 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58": container is already in removing state; Skipping pod "cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)" Oct 2 19:38:45.034049 kubelet[1409]: E1002 19:38:45.034042 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:45.034277 kubelet[1409]: E1002 19:38:45.034251 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:38:45.141523 env[1112]: time="2023-10-02T19:38:45.141470225Z" level=info msg="RemoveContainer for \"cdd3c6d0f7f531604bf6daf48fcecc92c9da6a8a66d05ce3ceb764136e9eaa58\" returns successfully" Oct 2 19:38:45.214844 systemd[1]: run-containerd-runc-k8s.io-3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b-runc.voUotM.mount: Deactivated successfully. Oct 2 19:38:45.214959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b-rootfs.mount: Deactivated successfully. 
Oct 2 19:38:45.828797 kubelet[1409]: E1002 19:38:45.828743 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:46.829545 kubelet[1409]: E1002 19:38:46.829478 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:47.471829 kubelet[1409]: W1002 19:38:47.471756 1409 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode63a566e_4cf3_47b6_b4e3_c31a6f6fcd6d.slice/cri-containerd-3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b.scope WatchSource:0}: task 3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b not found: not found Oct 2 19:38:47.830779 kubelet[1409]: E1002 19:38:47.830609 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:48.831507 kubelet[1409]: E1002 19:38:48.831451 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:49.796463 kubelet[1409]: E1002 19:38:49.796405 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:49.831778 kubelet[1409]: E1002 19:38:49.831726 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:50.832872 kubelet[1409]: E1002 19:38:50.832800 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:51.833328 kubelet[1409]: E1002 19:38:51.833253 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:52.833456 kubelet[1409]: E1002 19:38:52.833405 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:53.833785 kubelet[1409]: E1002 19:38:53.833720 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:54.834562 kubelet[1409]: E1002 19:38:54.834496 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:55.835738 kubelet[1409]: E1002 19:38:55.835669 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:56.836101 kubelet[1409]: E1002 19:38:56.836052 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:57.836177 kubelet[1409]: E1002 19:38:57.836142 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:58.836626 kubelet[1409]: E1002 19:38:58.836585 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:38:58.949979 kubelet[1409]: E1002 19:38:58.949940 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:38:58.950214 kubelet[1409]: E1002 19:38:58.950196 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: 
\"back-off 20s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:38:59.837210 kubelet[1409]: E1002 19:38:59.837159 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:00.838186 kubelet[1409]: E1002 19:39:00.838110 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:01.838911 kubelet[1409]: E1002 19:39:01.838849 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:02.839434 kubelet[1409]: E1002 19:39:02.839370 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:03.839876 kubelet[1409]: E1002 19:39:03.839830 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:04.840513 kubelet[1409]: E1002 19:39:04.840460 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:05.840647 kubelet[1409]: E1002 19:39:05.840592 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:06.840980 kubelet[1409]: E1002 19:39:06.840938 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:07.841249 kubelet[1409]: E1002 19:39:07.841194 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:08.841663 kubelet[1409]: E1002 19:39:08.841615 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:09.796792 kubelet[1409]: E1002 19:39:09.796743 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:09.842117 kubelet[1409]: E1002 19:39:09.842051 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:10.842475 kubelet[1409]: E1002 19:39:10.842445 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:10.949533 kubelet[1409]: E1002 19:39:10.949505 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:10.960783 env[1112]: time="2023-10-02T19:39:10.960730952Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:39:11.510601 env[1112]: time="2023-10-02T19:39:11.510517242Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\"" Oct 2 19:39:11.511154 env[1112]: time="2023-10-02T19:39:11.511122910Z" level=info msg="StartContainer for \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\"" Oct 2 19:39:11.527524 
systemd[1]: Started cri-containerd-c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981.scope. Oct 2 19:39:11.534423 systemd[1]: cri-containerd-c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981.scope: Deactivated successfully. Oct 2 19:39:11.534719 systemd[1]: Stopped cri-containerd-c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981.scope. Oct 2 19:39:11.542458 env[1112]: time="2023-10-02T19:39:11.542395945Z" level=info msg="shim disconnected" id=c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981 Oct 2 19:39:11.542458 env[1112]: time="2023-10-02T19:39:11.542446059Z" level=warning msg="cleaning up after shim disconnected" id=c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981 namespace=k8s.io Oct 2 19:39:11.542458 env[1112]: time="2023-10-02T19:39:11.542454434Z" level=info msg="cleaning up dead shim" Oct 2 19:39:11.549132 env[1112]: time="2023-10-02T19:39:11.549073380Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:39:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1872 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:39:11Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:39:11.549365 env[1112]: time="2023-10-02T19:39:11.549317028Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:39:11.549539 env[1112]: time="2023-10-02T19:39:11.549489112Z" level=error msg="Failed to pipe stdout of container \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\"" error="reading from a closed fifo" Oct 2 19:39:11.549539 env[1112]: time="2023-10-02T19:39:11.549498439Z" level=error msg="Failed to pipe stderr of container \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\"" error="reading from a closed fifo" Oct 2 19:39:11.551734 env[1112]: time="2023-10-02T19:39:11.551695195Z" level=error msg="StartContainer for \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:39:11.551932 kubelet[1409]: E1002 19:39:11.551902 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981" Oct 2 19:39:11.552072 kubelet[1409]: E1002 19:39:11.552021 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:39:11.552072 kubelet[1409]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:39:11.552072 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:39:11.552072 kubelet[1409]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sf6rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:39:11.552318 kubelet[1409]: E1002 19:39:11.552069 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:39:11.842743 kubelet[1409]: E1002 19:39:11.842617 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:12.071570 kubelet[1409]: I1002 19:39:12.071537 1409 scope.go:115] "RemoveContainer" containerID="3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b" Oct 2 19:39:12.071929 kubelet[1409]: I1002 19:39:12.071907 1409 scope.go:115] "RemoveContainer" containerID="3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b" Oct 2 19:39:12.072720 env[1112]: time="2023-10-02T19:39:12.072677560Z" level=info msg="RemoveContainer for \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\"" Oct 2 19:39:12.073038 env[1112]: time="2023-10-02T19:39:12.073006628Z" level=info msg="RemoveContainer for \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\"" Oct 2 19:39:12.073172 env[1112]: time="2023-10-02T19:39:12.073133035Z" level=error msg="RemoveContainer for \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\" failed" error="failed to set removing state for container \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\": container is already in removing state" Oct 2 19:39:12.073328 kubelet[1409]: E1002 19:39:12.073305 1409 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = 
failed to set removing state for container \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\": container is already in removing state" containerID="3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b" Oct 2 19:39:12.073398 kubelet[1409]: E1002 19:39:12.073344 1409 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b": container is already in removing state; Skipping pod "cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)" Oct 2 19:39:12.073429 kubelet[1409]: E1002 19:39:12.073415 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:12.073675 kubelet[1409]: E1002 19:39:12.073658 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:39:12.075812 env[1112]: time="2023-10-02T19:39:12.075778034Z" level=info msg="RemoveContainer for \"3d65a1b8fc643b312bb6a7dbcaab8f728a482e71604454dc4e4109ab00ffb75b\" returns successfully" Oct 2 19:39:12.377465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981-rootfs.mount: Deactivated successfully. Oct 2 19:39:12.843027 kubelet[1409]: E1002 19:39:12.842895 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:13.844020 kubelet[1409]: E1002 19:39:13.843946 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:14.648462 kubelet[1409]: W1002 19:39:14.648415 1409 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode63a566e_4cf3_47b6_b4e3_c31a6f6fcd6d.slice/cri-containerd-c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981.scope WatchSource:0}: task c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981 not found: not found Oct 2 19:39:14.844341 kubelet[1409]: E1002 19:39:14.844276 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:15.844728 kubelet[1409]: E1002 19:39:15.844666 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:16.845007 kubelet[1409]: E1002 19:39:16.844949 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:17.845795 kubelet[1409]: E1002 19:39:17.845736 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:18.846747 kubelet[1409]: E1002 19:39:18.846701 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:19.847304 kubelet[1409]: E1002 19:39:19.847222 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:20.848132 
kubelet[1409]: E1002 19:39:20.848056 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:21.848614 kubelet[1409]: E1002 19:39:21.848573 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:22.849761 kubelet[1409]: E1002 19:39:22.849698 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:23.850230 kubelet[1409]: E1002 19:39:23.850198 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:24.851221 kubelet[1409]: E1002 19:39:24.851181 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:25.851689 kubelet[1409]: E1002 19:39:25.851637 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:25.949579 kubelet[1409]: E1002 19:39:25.949552 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:25.949749 kubelet[1409]: E1002 19:39:25.949735 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:39:26.851854 kubelet[1409]: E1002 19:39:26.851801 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:27.851980 kubelet[1409]: E1002 19:39:27.851946 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:28.853054 kubelet[1409]: E1002 19:39:28.852990 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:29.796455 kubelet[1409]: E1002 19:39:29.796368 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:29.853728 kubelet[1409]: E1002 19:39:29.853643 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:30.854275 kubelet[1409]: E1002 19:39:30.854227 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:31.855052 kubelet[1409]: E1002 19:39:31.855015 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:32.855147 kubelet[1409]: E1002 19:39:32.855109 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:33.856114 kubelet[1409]: E1002 19:39:33.856058 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:34.857008 kubelet[1409]: E1002 19:39:34.856956 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:34.950256 kubelet[1409]: E1002 19:39:34.950199 1409 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:35.857941 kubelet[1409]: E1002 19:39:35.857885 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:36.858526 kubelet[1409]: E1002 19:39:36.858480 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:37.859468 kubelet[1409]: E1002 19:39:37.859389 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:37.949889 kubelet[1409]: E1002 19:39:37.949850 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:39:37.950123 kubelet[1409]: E1002 19:39:37.950094 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:39:38.860279 kubelet[1409]: E1002 19:39:38.860224 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:39.860985 kubelet[1409]: E1002 19:39:39.860924 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:40.861315 kubelet[1409]: E1002 19:39:40.861251 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:41.861483 kubelet[1409]: E1002 19:39:41.861403 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:42.862385 kubelet[1409]: E1002 19:39:42.862298 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:43.862801 kubelet[1409]: E1002 19:39:43.862763 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:44.863797 kubelet[1409]: E1002 19:39:44.863747 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:45.864844 kubelet[1409]: E1002 19:39:45.864792 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:46.865808 kubelet[1409]: E1002 19:39:46.865748 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:47.866295 kubelet[1409]: E1002 19:39:47.866231 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:48.866901 kubelet[1409]: E1002 19:39:48.866845 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:48.950109 kubelet[1409]: E1002 19:39:48.950078 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 2 19:39:48.950340 kubelet[1409]: E1002 19:39:48.950322 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:39:49.796710 kubelet[1409]: E1002 19:39:49.796671 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:49.817887 kubelet[1409]: E1002 19:39:49.817859 1409 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:39:49.867653 kubelet[1409]: E1002 19:39:49.867595 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:49.873047 kubelet[1409]: E1002 19:39:49.873030 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:50.868111 kubelet[1409]: E1002 19:39:50.868036 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:51.868744 kubelet[1409]: E1002 19:39:51.868677 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:52.869069 kubelet[1409]: E1002 19:39:52.868999 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:53.870043 kubelet[1409]: E1002 19:39:53.869967 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:54.870122 kubelet[1409]: E1002 19:39:54.870075 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:54.873615 kubelet[1409]: E1002 19:39:54.873599 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:39:55.870953 kubelet[1409]: E1002 19:39:55.870897 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:56.871838 kubelet[1409]: E1002 19:39:56.871779 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:57.872776 kubelet[1409]: E1002 19:39:57.872711 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:58.873618 kubelet[1409]: E1002 19:39:58.873541 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:59.874092 kubelet[1409]: E1002 19:39:59.874050 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:39:59.874502 kubelet[1409]: E1002 19:39:59.874289 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:00.874356 kubelet[1409]: E1002 19:40:00.874294 1409 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:01.874888 kubelet[1409]: E1002 19:40:01.874827 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:02.875324 kubelet[1409]: E1002 19:40:02.875274 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:02.949141 kubelet[1409]: E1002 19:40:02.949107 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:02.951102 env[1112]: time="2023-10-02T19:40:02.951068504Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:40:02.962751 env[1112]: time="2023-10-02T19:40:02.962687401Z" level=info msg="CreateContainer within sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3\"" Oct 2 19:40:02.963095 env[1112]: time="2023-10-02T19:40:02.963069476Z" level=info msg="StartContainer for \"11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3\"" Oct 2 19:40:02.978051 systemd[1]: Started cri-containerd-11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3.scope. Oct 2 19:40:02.984558 systemd[1]: cri-containerd-11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3.scope: Deactivated successfully. Oct 2 19:40:02.984811 systemd[1]: Stopped cri-containerd-11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3.scope. 
Oct 2 19:40:02.992657 env[1112]: time="2023-10-02T19:40:02.992591971Z" level=info msg="shim disconnected" id=11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3 Oct 2 19:40:02.992657 env[1112]: time="2023-10-02T19:40:02.992650992Z" level=warning msg="cleaning up after shim disconnected" id=11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3 namespace=k8s.io Oct 2 19:40:02.992657 env[1112]: time="2023-10-02T19:40:02.992659660Z" level=info msg="cleaning up dead shim" Oct 2 19:40:03.002326 env[1112]: time="2023-10-02T19:40:03.002286042Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:40:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1915 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:40:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:40:03.002566 env[1112]: time="2023-10-02T19:40:03.002510869Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:40:03.002738 env[1112]: time="2023-10-02T19:40:03.002685441Z" level=error msg="Failed to pipe stdout of container \"11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3\"" error="reading from a closed fifo" Oct 2 19:40:03.004269 env[1112]: time="2023-10-02T19:40:03.004226286Z" level=error msg="Failed to pipe stderr of container \"11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3\"" error="reading from a closed fifo" Oct 2 19:40:03.006493 env[1112]: time="2023-10-02T19:40:03.006451468Z" level=error msg="StartContainer for \"11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:40:03.006692 kubelet[1409]: E1002 19:40:03.006662 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3" Oct 2 19:40:03.006785 kubelet[1409]: E1002 19:40:03.006770 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:40:03.006785 kubelet[1409]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:40:03.006785 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:40:03.006785 kubelet[1409]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sf6rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:40:03.006914 kubelet[1409]: E1002 19:40:03.006810 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:40:03.143800 kubelet[1409]: I1002 19:40:03.143127 1409 scope.go:115] "RemoveContainer" containerID="c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981" Oct 2 19:40:03.143800 kubelet[1409]: I1002 19:40:03.143459 1409 scope.go:115] "RemoveContainer" containerID="c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981" Oct 2 19:40:03.144203 env[1112]: time="2023-10-02T19:40:03.144160930Z" level=info msg="RemoveContainer for \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\"" Oct 2 19:40:03.144267 env[1112]: time="2023-10-02T19:40:03.144235622Z" level=info msg="RemoveContainer for \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\"" Oct 2 19:40:03.147043 env[1112]: time="2023-10-02T19:40:03.146972437Z" level=error msg="RemoveContainer for \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\" failed" error="failed to set removing state for container \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\": container is already in removing state" Oct 2 19:40:03.147229 kubelet[1409]: E1002 19:40:03.147185 1409 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\": container is already in removing state" 
containerID="c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981" Oct 2 19:40:03.147229 kubelet[1409]: I1002 19:40:03.147220 1409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981} err="rpc error: code = Unknown desc = failed to set removing state for container \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\": container is already in removing state" Oct 2 19:40:03.149413 env[1112]: time="2023-10-02T19:40:03.149379987Z" level=info msg="RemoveContainer for \"c073bfecb7232d2138cfe53917032d24b79c2dd3f612785bed2bd3042ea70981\" returns successfully" Oct 2 19:40:03.149648 kubelet[1409]: E1002 19:40:03.149620 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:03.149869 kubelet[1409]: E1002 19:40:03.149854 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:40:03.875647 kubelet[1409]: E1002 19:40:03.875534 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:03.958791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3-rootfs.mount: Deactivated successfully. Oct 2 19:40:04.875110 kubelet[1409]: E1002 19:40:04.875076 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:04.876207 kubelet[1409]: E1002 19:40:04.876188 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:05.876471 kubelet[1409]: E1002 19:40:05.876407 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:06.098089 kubelet[1409]: W1002 19:40:06.098029 1409 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode63a566e_4cf3_47b6_b4e3_c31a6f6fcd6d.slice/cri-containerd-11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3.scope WatchSource:0}: task 11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3 not found: not found Oct 2 19:40:06.877429 kubelet[1409]: E1002 19:40:06.877362 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:07.878447 kubelet[1409]: E1002 19:40:07.878385 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:08.878682 kubelet[1409]: E1002 19:40:08.878626 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:09.796500 kubelet[1409]: E1002 19:40:09.796454 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:09.875968 kubelet[1409]: E1002 19:40:09.875943 1409 kubelet.go:2475] "Container runtime network 
not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:09.879105 kubelet[1409]: E1002 19:40:09.879088 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:10.879260 kubelet[1409]: E1002 19:40:10.879205 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:11.879920 kubelet[1409]: E1002 19:40:11.879852 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:12.880305 kubelet[1409]: E1002 19:40:12.880231 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:13.880918 kubelet[1409]: E1002 19:40:13.880854 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:14.876909 kubelet[1409]: E1002 19:40:14.876869 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:14.881001 kubelet[1409]: E1002 19:40:14.880979 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:15.881480 kubelet[1409]: E1002 19:40:15.881425 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:15.950066 kubelet[1409]: E1002 19:40:15.950037 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:15.950773 kubelet[1409]: E1002 19:40:15.950756 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:40:16.882028 kubelet[1409]: E1002 19:40:16.881973 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:17.882929 kubelet[1409]: E1002 19:40:17.882864 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:18.883796 kubelet[1409]: E1002 19:40:18.883745 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:19.877923 kubelet[1409]: E1002 19:40:19.877891 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:19.884089 kubelet[1409]: E1002 19:40:19.884050 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:20.884755 kubelet[1409]: E1002 19:40:20.884689 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:21.885131 kubelet[1409]: E1002 19:40:21.885047 1409 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:22.885666 kubelet[1409]: E1002 19:40:22.885576 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:23.886223 kubelet[1409]: E1002 19:40:23.886151 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:24.879298 kubelet[1409]: E1002 19:40:24.879266 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:24.886495 kubelet[1409]: E1002 19:40:24.886469 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:25.886840 kubelet[1409]: E1002 19:40:25.886780 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:26.887181 kubelet[1409]: E1002 19:40:26.887110 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:27.888299 kubelet[1409]: E1002 19:40:27.888233 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:28.888387 kubelet[1409]: E1002 19:40:28.888328 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.796782 kubelet[1409]: E1002 19:40:29.796739 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.880545 kubelet[1409]: E1002 19:40:29.880505 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:29.888840 kubelet[1409]: E1002 19:40:29.888821 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:29.949759 kubelet[1409]: E1002 19:40:29.949716 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:29.949976 kubelet[1409]: E1002 19:40:29.949962 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:40:30.889193 kubelet[1409]: E1002 19:40:30.889139 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:31.889867 kubelet[1409]: E1002 19:40:31.889827 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:32.890827 kubelet[1409]: E1002 19:40:32.890772 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:33.891583 kubelet[1409]: E1002 19:40:33.891534 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:34.881357 kubelet[1409]: E1002 
19:40:34.881327 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:34.892493 kubelet[1409]: E1002 19:40:34.892458 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:35.893179 kubelet[1409]: E1002 19:40:35.893133 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:36.893863 kubelet[1409]: E1002 19:40:36.893816 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:37.894810 kubelet[1409]: E1002 19:40:37.894752 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:38.895118 kubelet[1409]: E1002 19:40:38.895065 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:39.882584 kubelet[1409]: E1002 19:40:39.882555 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:39.895786 kubelet[1409]: E1002 19:40:39.895755 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:40.896204 kubelet[1409]: E1002 19:40:40.896128 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:41.896886 kubelet[1409]: E1002 19:40:41.896829 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:42.897434 kubelet[1409]: E1002 19:40:42.897369 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:42.949316 kubelet[1409]: E1002 19:40:42.949276 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:42.949538 kubelet[1409]: E1002 19:40:42.949517 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:40:43.897553 kubelet[1409]: E1002 19:40:43.897509 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:44.883934 kubelet[1409]: E1002 19:40:44.883902 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:44.898132 kubelet[1409]: E1002 19:40:44.898095 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:45.898760 kubelet[1409]: E1002 19:40:45.898698 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:46.899785 kubelet[1409]: E1002 19:40:46.899738 1409 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:47.900303 kubelet[1409]: E1002 19:40:47.900245 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:48.900847 kubelet[1409]: E1002 19:40:48.900776 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:48.949421 kubelet[1409]: E1002 19:40:48.949384 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:49.796118 kubelet[1409]: E1002 19:40:49.796058 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:49.884728 kubelet[1409]: E1002 19:40:49.884704 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:49.900927 kubelet[1409]: E1002 19:40:49.900906 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:50.901997 kubelet[1409]: E1002 19:40:50.901941 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:51.902455 kubelet[1409]: E1002 19:40:51.902401 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:52.902670 kubelet[1409]: E1002 19:40:52.902614 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:53.903790 kubelet[1409]: E1002 19:40:53.903748 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:54.885293 kubelet[1409]: E1002 19:40:54.885263 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:54.904463 kubelet[1409]: E1002 19:40:54.904422 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:55.904588 kubelet[1409]: E1002 19:40:55.904521 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:55.949582 kubelet[1409]: E1002 19:40:55.949549 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:40:55.949759 kubelet[1409]: E1002 19:40:55.949748 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:40:56.904931 kubelet[1409]: E1002 19:40:56.904874 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:57.905710 kubelet[1409]: E1002 19:40:57.905645 1409 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:58.906030 kubelet[1409]: E1002 19:40:58.905966 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:40:59.885940 kubelet[1409]: E1002 19:40:59.885907 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:40:59.906135 kubelet[1409]: E1002 19:40:59.906090 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:00.906768 kubelet[1409]: E1002 19:41:00.906708 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:01.907380 kubelet[1409]: E1002 19:41:01.907320 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:02.908061 kubelet[1409]: E1002 19:41:02.907994 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:03.908963 kubelet[1409]: E1002 19:41:03.908917 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:04.886788 kubelet[1409]: E1002 19:41:04.886760 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:04.910090 kubelet[1409]: E1002 19:41:04.910041 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:05.911054 kubelet[1409]: E1002 19:41:05.910999 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:06.912908 kubelet[1409]: E1002 19:41:06.911684 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:07.913901 kubelet[1409]: E1002 19:41:07.913851 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:07.949890 kubelet[1409]: E1002 19:41:07.949545 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:07.951028 kubelet[1409]: E1002 19:41:07.951012 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ml547_kube-system(e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d)\"" pod="kube-system/cilium-ml547" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d Oct 2 19:41:08.589754 env[1112]: time="2023-10-02T19:41:08.589713889Z" level=info msg="StopPodSandbox for \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\"" Oct 2 19:41:08.590201 env[1112]: time="2023-10-02T19:41:08.589774634Z" level=info msg="Container to stop \"11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:41:08.591013 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b-shm.mount: Deactivated successfully. Oct 2 19:41:08.595258 systemd[1]: cri-containerd-7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b.scope: Deactivated successfully. Oct 2 19:41:08.594000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:41:08.605085 kernel: kauditd_printk_skb: 279 callbacks suppressed Oct 2 19:41:08.605222 kernel: audit: type=1334 audit(1696275668.594:663): prog-id=70 op=UNLOAD Oct 2 19:41:08.608000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:41:08.611252 kernel: audit: type=1334 audit(1696275668.608:664): prog-id=74 op=UNLOAD Oct 2 19:41:08.612148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b-rootfs.mount: Deactivated successfully. Oct 2 19:41:08.650050 env[1112]: time="2023-10-02T19:41:08.649993150Z" level=info msg="shim disconnected" id=7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b Oct 2 19:41:08.650373 env[1112]: time="2023-10-02T19:41:08.650336547Z" level=warning msg="cleaning up after shim disconnected" id=7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b namespace=k8s.io Oct 2 19:41:08.650373 env[1112]: time="2023-10-02T19:41:08.650357467Z" level=info msg="cleaning up dead shim" Oct 2 19:41:08.656410 env[1112]: time="2023-10-02T19:41:08.656360139Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1951 runtime=io.containerd.runc.v2\n" Oct 2 19:41:08.656672 env[1112]: time="2023-10-02T19:41:08.656649615Z" level=info msg="TearDown network for sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" successfully" Oct 2 19:41:08.656730 env[1112]: time="2023-10-02T19:41:08.656672088Z" level=info msg="StopPodSandbox for \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" returns successfully" Oct 2 19:41:08.696662 kubelet[1409]: I1002 19:41:08.696599 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-lib-modules\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.696662 kubelet[1409]: I1002 19:41:08.696654 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-cgroup\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.696888 kubelet[1409]: I1002 19:41:08.696687 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-host-proc-sys-kernel\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.696888 kubelet[1409]: I1002 19:41:08.696720 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf6rr\" (UniqueName: \"kubernetes.io/projected/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-kube-api-access-sf6rr\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.696888 kubelet[1409]: I1002 19:41:08.696728 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.696888 kubelet[1409]: I1002 19:41:08.696734 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.696888 kubelet[1409]: I1002 19:41:08.696743 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-etc-cni-netd\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697021 kubelet[1409]: I1002 19:41:08.696759 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.697021 kubelet[1409]: I1002 19:41:08.696778 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-xtables-lock\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697021 kubelet[1409]: I1002 19:41:08.696778 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.697021 kubelet[1409]: I1002 19:41:08.696799 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.697021 kubelet[1409]: I1002 19:41:08.696808 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cni-path\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697141 kubelet[1409]: I1002 19:41:08.696824 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-host-proc-sys-net\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697141 kubelet[1409]: I1002 19:41:08.696836 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cni-path" (OuterVolumeSpecName: "cni-path") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.697141 kubelet[1409]: I1002 19:41:08.696851 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-config-path\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697141 kubelet[1409]: I1002 19:41:08.696876 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-hostproc\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697141 kubelet[1409]: I1002 19:41:08.696875 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.697284 kubelet[1409]: I1002 19:41:08.696891 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-run\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697284 kubelet[1409]: I1002 19:41:08.696909 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-clustermesh-secrets\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697284 kubelet[1409]: I1002 19:41:08.696925 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-bpf-maps\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697284 kubelet[1409]: I1002 19:41:08.696953 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-hubble-tls\") pod \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\" (UID: \"e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d\") " Oct 2 19:41:08.697284 kubelet[1409]: I1002 19:41:08.696974 1409 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.697284 kubelet[1409]: I1002 19:41:08.696982 1409 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.697284 kubelet[1409]: I1002 19:41:08.696990 1409 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.697454 kubelet[1409]: I1002 19:41:08.696999 1409 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.697454 kubelet[1409]: I1002 19:41:08.697007 1409 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cni-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.697454 kubelet[1409]: W1002 19:41:08.697003 1409 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:41:08.697454 kubelet[1409]: I1002 19:41:08.697187 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-hostproc" (OuterVolumeSpecName: "hostproc") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.697454 kubelet[1409]: I1002 19:41:08.697206 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.697454 kubelet[1409]: I1002 19:41:08.697371 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:41:08.697652 kubelet[1409]: I1002 19:41:08.697027 1409 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.697652 kubelet[1409]: I1002 19:41:08.697406 1409 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-lib-modules\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.699703 kubelet[1409]: I1002 19:41:08.699180 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:41:08.700505 kubelet[1409]: I1002 19:41:08.699932 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:41:08.700505 kubelet[1409]: I1002 19:41:08.700080 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:41:08.700505 kubelet[1409]: I1002 19:41:08.700414 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-kube-api-access-sf6rr" (OuterVolumeSpecName: "kube-api-access-sf6rr") pod "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" (UID: "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d"). InnerVolumeSpecName "kube-api-access-sf6rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:41:08.700520 systemd[1]: var-lib-kubelet-pods-e63a566e\x2d4cf3\x2d47b6\x2db4e3\x2dc31a6f6fcd6d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:41:08.702017 systemd[1]: var-lib-kubelet-pods-e63a566e\x2d4cf3\x2d47b6\x2db4e3\x2dc31a6f6fcd6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsf6rr.mount: Deactivated successfully. Oct 2 19:41:08.702083 systemd[1]: var-lib-kubelet-pods-e63a566e\x2d4cf3\x2d47b6\x2db4e3\x2dc31a6f6fcd6d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:41:08.798060 kubelet[1409]: I1002 19:41:08.798007 1409 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-sf6rr\" (UniqueName: \"kubernetes.io/projected/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-kube-api-access-sf6rr\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.798060 kubelet[1409]: I1002 19:41:08.798045 1409 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.798060 kubelet[1409]: I1002 19:41:08.798058 1409 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-hostproc\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.798060 kubelet[1409]: I1002 19:41:08.798071 1409 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-cilium-run\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.798060 kubelet[1409]: I1002 19:41:08.798081 1409 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.798359 kubelet[1409]: I1002 19:41:08.798090 1409 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.798359 kubelet[1409]: I1002 19:41:08.798116 1409 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:41:08.914661 kubelet[1409]: E1002 19:41:08.914597 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:09.237851 kubelet[1409]: I1002 19:41:09.237748 1409 scope.go:115] "RemoveContainer" containerID="11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3" Oct 2 19:41:09.239006 env[1112]: time="2023-10-02T19:41:09.238967877Z" level=info msg="RemoveContainer for \"11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3\"" Oct 2 19:41:09.241182 systemd[1]: Removed slice kubepods-burstable-pode63a566e_4cf3_47b6_b4e3_c31a6f6fcd6d.slice. 
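Once every volume above has reported "Volume detached", the kubelet can delete the per-pod volume tree, which it confirms further down with "Cleaned up orphaned pod volumes dir" for /var/lib/kubelet/pods/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d/volumes. A stdlib-only Go sketch for listing whatever is still left under that directory when debugging a teardown like this one; the pod UID and path are taken from the log, everything else is illustrative.

// Sketch only: walk the per-pod volumes directory left behind by the failed
// cilium-ml547 pod and print anything the kubelet has not yet cleaned up.
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	const podUID = "e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" // from the log above
	root := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")

	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		fmt.Println(path)
		return nil
	})
	if os.IsNotExist(err) {
		fmt.Println("volumes dir already removed:", root)
	} else if err != nil {
		fmt.Println("walk error:", err)
	}
}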
Oct 2 19:41:09.272094 env[1112]: time="2023-10-02T19:41:09.272054574Z" level=info msg="RemoveContainer for \"11765de8f7ac5bfbd5248fce89d6fe1534e376783708140889efd873d60862e3\" returns successfully" Oct 2 19:41:09.795875 kubelet[1409]: E1002 19:41:09.795824 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:09.887693 kubelet[1409]: E1002 19:41:09.887664 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:09.915054 kubelet[1409]: E1002 19:41:09.915015 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:09.950623 env[1112]: time="2023-10-02T19:41:09.950579003Z" level=info msg="StopPodSandbox for \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\"" Oct 2 19:41:09.950843 env[1112]: time="2023-10-02T19:41:09.950675264Z" level=info msg="TearDown network for sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" successfully" Oct 2 19:41:09.950843 env[1112]: time="2023-10-02T19:41:09.950711834Z" level=info msg="StopPodSandbox for \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" returns successfully" Oct 2 19:41:09.951434 kubelet[1409]: I1002 19:41:09.951414 1409 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d path="/var/lib/kubelet/pods/e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d/volumes" Oct 2 19:41:10.915188 kubelet[1409]: E1002 19:41:10.915136 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:11.028689 kubelet[1409]: I1002 19:41:11.028649 1409 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:41:11.028689 kubelet[1409]: E1002 19:41:11.028702 1409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.028935 kubelet[1409]: E1002 19:41:11.028712 1409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.028935 kubelet[1409]: E1002 19:41:11.028720 1409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.028935 kubelet[1409]: I1002 19:41:11.028736 1409 memory_manager.go:346] "RemoveStaleState removing state" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.028935 kubelet[1409]: I1002 19:41:11.028743 1409 memory_manager.go:346] "RemoveStaleState removing state" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.028935 kubelet[1409]: I1002 19:41:11.028750 1409 memory_manager.go:346] "RemoveStaleState removing state" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.028935 kubelet[1409]: I1002 19:41:11.028757 1409 memory_manager.go:346] "RemoveStaleState removing state" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.028935 kubelet[1409]: I1002 19:41:11.028764 1409 memory_manager.go:346] "RemoveStaleState removing state" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.029874 kubelet[1409]: I1002 19:41:11.029853 1409 
topology_manager.go:210] "Topology Admit Handler" Oct 2 19:41:11.029944 kubelet[1409]: E1002 19:41:11.029891 1409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.029944 kubelet[1409]: E1002 19:41:11.029904 1409 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e63a566e-4cf3-47b6-b4e3-c31a6f6fcd6d" containerName="mount-cgroup" Oct 2 19:41:11.033875 systemd[1]: Created slice kubepods-besteffort-pod95512a34_41cc_46b7_b757_f341f392733a.slice. Oct 2 19:41:11.037969 systemd[1]: Created slice kubepods-burstable-pod5ad2f08a_c72c_477d_9345_2a9238e54ab9.slice. Oct 2 19:41:11.111185 kubelet[1409]: I1002 19:41:11.111138 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-run\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111333 kubelet[1409]: I1002 19:41:11.111207 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-etc-cni-netd\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111333 kubelet[1409]: I1002 19:41:11.111228 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ad2f08a-c72c-477d-9345-2a9238e54ab9-clustermesh-secrets\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111333 kubelet[1409]: I1002 19:41:11.111246 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-config-path\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111333 kubelet[1409]: I1002 19:41:11.111262 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-ipsec-secrets\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111333 kubelet[1409]: I1002 19:41:11.111294 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjbxh\" (UniqueName: \"kubernetes.io/projected/5ad2f08a-c72c-477d-9345-2a9238e54ab9-kube-api-access-fjbxh\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111476 kubelet[1409]: I1002 19:41:11.111340 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-hostproc\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111476 kubelet[1409]: I1002 19:41:11.111362 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-cgroup\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111476 kubelet[1409]: I1002 19:41:11.111378 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-lib-modules\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111476 kubelet[1409]: I1002 19:41:11.111410 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ad2f08a-c72c-477d-9345-2a9238e54ab9-hubble-tls\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111476 kubelet[1409]: I1002 19:41:11.111436 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkw9l\" (UniqueName: \"kubernetes.io/projected/95512a34-41cc-46b7-b757-f341f392733a-kube-api-access-gkw9l\") pod \"cilium-operator-f59cbd8c6-g2lnz\" (UID: \"95512a34-41cc-46b7-b757-f341f392733a\") " pod="kube-system/cilium-operator-f59cbd8c6-g2lnz" Oct 2 19:41:11.111476 kubelet[1409]: I1002 19:41:11.111453 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cni-path\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111609 kubelet[1409]: I1002 19:41:11.111471 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-xtables-lock\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111609 kubelet[1409]: I1002 19:41:11.111522 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-host-proc-sys-kernel\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111609 kubelet[1409]: I1002 19:41:11.111546 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-bpf-maps\") pod \"cilium-cqcsd\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.111609 kubelet[1409]: I1002 19:41:11.111579 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95512a34-41cc-46b7-b757-f341f392733a-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-g2lnz\" (UID: \"95512a34-41cc-46b7-b757-f341f392733a\") " pod="kube-system/cilium-operator-f59cbd8c6-g2lnz" Oct 2 19:41:11.111609 kubelet[1409]: I1002 19:41:11.111595 1409 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-host-proc-sys-net\") pod \"cilium-cqcsd\" (UID: 
\"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " pod="kube-system/cilium-cqcsd" Oct 2 19:41:11.336930 kubelet[1409]: E1002 19:41:11.336259 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:11.337063 env[1112]: time="2023-10-02T19:41:11.336809633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-g2lnz,Uid:95512a34-41cc-46b7-b757-f341f392733a,Namespace:kube-system,Attempt:0,}" Oct 2 19:41:11.347403 kubelet[1409]: E1002 19:41:11.347359 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:11.347813 env[1112]: time="2023-10-02T19:41:11.347761208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cqcsd,Uid:5ad2f08a-c72c-477d-9345-2a9238e54ab9,Namespace:kube-system,Attempt:0,}" Oct 2 19:41:11.350666 env[1112]: time="2023-10-02T19:41:11.350602968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:41:11.350666 env[1112]: time="2023-10-02T19:41:11.350643725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:41:11.350666 env[1112]: time="2023-10-02T19:41:11.350654334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:41:11.350889 env[1112]: time="2023-10-02T19:41:11.350839714Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827 pid=1980 runtime=io.containerd.runc.v2 Oct 2 19:41:11.361612 env[1112]: time="2023-10-02T19:41:11.361467318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:41:11.361612 env[1112]: time="2023-10-02T19:41:11.361499119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:41:11.361612 env[1112]: time="2023-10-02T19:41:11.361508496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:41:11.362287 env[1112]: time="2023-10-02T19:41:11.361730254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54 pid=2003 runtime=io.containerd.runc.v2 Oct 2 19:41:11.365639 systemd[1]: Started cri-containerd-281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827.scope. Oct 2 19:41:11.376356 systemd[1]: Started cri-containerd-a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54.scope. 
Oct 2 19:41:11.385199 kernel: audit: type=1400 audit(1696275671.379:665): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.385305 kernel: audit: type=1400 audit(1696275671.379:666): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.385321 kernel: audit: type=1400 audit(1696275671.379:667): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388842 kernel: audit: type=1400 audit(1696275671.379:668): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.390607 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:41:11.390638 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:41:11.390654 kernel: audit: type=1400 audit(1696275671.379:669): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.390669 kernel: audit: backlog limit exceeded Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.379000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.380000 audit: 
BPF prog-id=78 op=LOAD Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1980 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313835346130396239323133376264323663666631626166636639 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1980 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313835346130396239323133376264323663666631626166636639 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.384000 audit: BPF prog-id=79 op=LOAD Oct 2 19:41:11.384000 audit[1990]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00021d130 items=0 ppid=1980 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313835346130396239323133376264323663666631626166636639 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.386000 audit: BPF prog-id=80 op=LOAD Oct 2 19:41:11.386000 audit[1990]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00021d178 items=0 ppid=1980 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.386000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313835346130396239323133376264323663666631626166636639 Oct 2 19:41:11.388000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:41:11.388000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { perfmon } for pid=1990 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1990]: AVC avc: denied { bpf } for pid=1990 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit: BPF prog-id=81 op=LOAD Oct 2 19:41:11.388000 audit[1990]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00021d588 items=0 ppid=1980 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238313835346130396239323133376264323663666631626166636639 Oct 2 19:41:11.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit: BPF prog-id=82 op=LOAD Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2003 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130363434303135323031636239326330623931623332636132663563 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2003 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130363434303135323031636239326330623931623332636132663563 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { 
bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit: BPF prog-id=83 op=LOAD Oct 2 19:41:11.392000 audit[2017]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0000a19a0 items=0 ppid=2003 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130363434303135323031636239326330623931623332636132663563 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit: BPF prog-id=84 op=LOAD Oct 2 19:41:11.392000 audit[2017]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0000a19e8 items=0 ppid=2003 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130363434303135323031636239326330623931623332636132663563 Oct 2 19:41:11.392000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:41:11.392000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { perfmon } for pid=2017 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit[2017]: AVC avc: denied { bpf } for pid=2017 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:11.392000 audit: BPF prog-id=85 op=LOAD Oct 2 19:41:11.392000 audit[2017]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0000a1df8 items=0 ppid=2003 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:11.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130363434303135323031636239326330623931623332636132663563 Oct 2 19:41:11.402194 env[1112]: time="2023-10-02T19:41:11.402113771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cqcsd,Uid:5ad2f08a-c72c-477d-9345-2a9238e54ab9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\"" Oct 2 19:41:11.403381 kubelet[1409]: E1002 19:41:11.402952 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:11.405055 env[1112]: time="2023-10-02T19:41:11.405028768Z" level=info msg="CreateContainer within sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:41:11.415901 env[1112]: time="2023-10-02T19:41:11.415838476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-g2lnz,Uid:95512a34-41cc-46b7-b757-f341f392733a,Namespace:kube-system,Attempt:0,} returns sandbox id \"281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827\"" Oct 2 19:41:11.416451 kubelet[1409]: E1002 19:41:11.416433 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:11.417220 env[1112]: time="2023-10-02T19:41:11.417144479Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:41:11.423268 env[1112]: time="2023-10-02T19:41:11.423045970Z" level=info msg="CreateContainer within sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\"" Oct 2 19:41:11.423418 env[1112]: time="2023-10-02T19:41:11.423390579Z" level=info msg="StartContainer for \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\"" Oct 2 19:41:11.435602 systemd[1]: Started cri-containerd-1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469.scope. Oct 2 19:41:11.446070 systemd[1]: cri-containerd-1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469.scope: Deactivated successfully. Oct 2 19:41:11.446318 systemd[1]: Stopped cri-containerd-1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469.scope. 
Oct 2 19:41:11.460617 env[1112]: time="2023-10-02T19:41:11.460558300Z" level=info msg="shim disconnected" id=1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469 Oct 2 19:41:11.460617 env[1112]: time="2023-10-02T19:41:11.460617993Z" level=warning msg="cleaning up after shim disconnected" id=1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469 namespace=k8s.io Oct 2 19:41:11.460617 env[1112]: time="2023-10-02T19:41:11.460627300Z" level=info msg="cleaning up dead shim" Oct 2 19:41:11.466618 env[1112]: time="2023-10-02T19:41:11.466574798Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2079 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:41:11Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:41:11.466864 env[1112]: time="2023-10-02T19:41:11.466810031Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:41:11.467027 env[1112]: time="2023-10-02T19:41:11.466969382Z" level=error msg="Failed to pipe stdout of container \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\"" error="reading from a closed fifo" Oct 2 19:41:11.467027 env[1112]: time="2023-10-02T19:41:11.466995882Z" level=error msg="Failed to pipe stderr of container \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\"" error="reading from a closed fifo" Oct 2 19:41:11.469486 env[1112]: time="2023-10-02T19:41:11.469447155Z" level=error msg="StartContainer for \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:41:11.469638 kubelet[1409]: E1002 19:41:11.469617 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469" Oct 2 19:41:11.469742 kubelet[1409]: E1002 19:41:11.469723 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:41:11.469742 kubelet[1409]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:41:11.469742 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:41:11.469742 kubelet[1409]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fjbxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:41:11.469899 kubelet[1409]: E1002 19:41:11.469770 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:11.915610 kubelet[1409]: E1002 19:41:11.915561 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:12.244726 kubelet[1409]: E1002 19:41:12.244634 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:12.246200 env[1112]: time="2023-10-02T19:41:12.246149912Z" level=info msg="CreateContainer within sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:41:12.260021 env[1112]: time="2023-10-02T19:41:12.259974305Z" level=info msg="CreateContainer within sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\"" Oct 2 19:41:12.260438 env[1112]: time="2023-10-02T19:41:12.260412471Z" level=info msg="StartContainer for \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\"" Oct 2 19:41:12.274554 systemd[1]: Started cri-containerd-e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226.scope. 
Oct 2 19:41:12.281852 systemd[1]: cri-containerd-e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226.scope: Deactivated successfully. Oct 2 19:41:12.282113 systemd[1]: Stopped cri-containerd-e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226.scope. Oct 2 19:41:12.292352 env[1112]: time="2023-10-02T19:41:12.292304480Z" level=info msg="shim disconnected" id=e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226 Oct 2 19:41:12.292473 env[1112]: time="2023-10-02T19:41:12.292352942Z" level=warning msg="cleaning up after shim disconnected" id=e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226 namespace=k8s.io Oct 2 19:41:12.292473 env[1112]: time="2023-10-02T19:41:12.292368942Z" level=info msg="cleaning up dead shim" Oct 2 19:41:12.298084 env[1112]: time="2023-10-02T19:41:12.298047330Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2115 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:41:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:41:12.298340 env[1112]: time="2023-10-02T19:41:12.298289037Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Oct 2 19:41:12.298504 env[1112]: time="2023-10-02T19:41:12.298448728Z" level=error msg="Failed to pipe stdout of container \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\"" error="reading from a closed fifo" Oct 2 19:41:12.298650 env[1112]: time="2023-10-02T19:41:12.298465019Z" level=error msg="Failed to pipe stderr of container \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\"" error="reading from a closed fifo" Oct 2 19:41:12.300475 env[1112]: time="2023-10-02T19:41:12.300435906Z" level=error msg="StartContainer for \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:41:12.300634 kubelet[1409]: E1002 19:41:12.300605 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226" Oct 2 19:41:12.300709 kubelet[1409]: E1002 19:41:12.300696 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:41:12.300709 kubelet[1409]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:41:12.300709 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:41:12.300709 kubelet[1409]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fjbxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:41:12.300845 kubelet[1409]: E1002 19:41:12.300727 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:12.916621 kubelet[1409]: E1002 19:41:12.916564 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:13.215649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226-rootfs.mount: Deactivated successfully. 
Oct 2 19:41:13.247766 kubelet[1409]: I1002 19:41:13.247728 1409 scope.go:115] "RemoveContainer" containerID="1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469" Oct 2 19:41:13.248080 kubelet[1409]: I1002 19:41:13.248054 1409 scope.go:115] "RemoveContainer" containerID="1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469" Oct 2 19:41:13.249194 env[1112]: time="2023-10-02T19:41:13.249134801Z" level=info msg="RemoveContainer for \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\"" Oct 2 19:41:13.249527 env[1112]: time="2023-10-02T19:41:13.249490092Z" level=info msg="RemoveContainer for \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\"" Oct 2 19:41:13.249639 env[1112]: time="2023-10-02T19:41:13.249590451Z" level=error msg="RemoveContainer for \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\" failed" error="failed to set removing state for container \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\": container is already in removing state" Oct 2 19:41:13.249740 kubelet[1409]: E1002 19:41:13.249723 1409 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\": container is already in removing state" containerID="1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469" Oct 2 19:41:13.249829 kubelet[1409]: E1002 19:41:13.249750 1409 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469": container is already in removing state; Skipping pod "cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9)" Oct 2 19:41:13.249829 kubelet[1409]: E1002 19:41:13.249809 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:13.250026 kubelet[1409]: E1002 19:41:13.250013 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9)\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:13.256659 env[1112]: time="2023-10-02T19:41:13.256607785Z" level=info msg="RemoveContainer for \"1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469\" returns successfully" Oct 2 19:41:13.297389 env[1112]: time="2023-10-02T19:41:13.297338642Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:41:13.298981 env[1112]: time="2023-10-02T19:41:13.298951384Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:41:13.300401 env[1112]: time="2023-10-02T19:41:13.300370219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 2 19:41:13.300746 env[1112]: time="2023-10-02T19:41:13.300709951Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 19:41:13.302140 env[1112]: time="2023-10-02T19:41:13.302113858Z" level=info msg="CreateContainer within sandbox \"281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:41:13.314410 env[1112]: time="2023-10-02T19:41:13.314360775Z" level=info msg="CreateContainer within sandbox \"281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\"" Oct 2 19:41:13.314831 env[1112]: time="2023-10-02T19:41:13.314788422Z" level=info msg="StartContainer for \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\"" Oct 2 19:41:13.332464 systemd[1]: Started cri-containerd-fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5.scope. Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit: BPF prog-id=86 op=LOAD Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1980 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:13.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663264383935306134313632663530313763376661343937333237 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1980 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:13.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663264383935306134313632663530313763376661343937333237 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.349000 audit: BPF prog-id=87 op=LOAD Oct 2 19:41:13.349000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0003129b0 items=0 ppid=1980 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:13.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663264383935306134313632663530313763376661343937333237 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit: BPF prog-id=88 op=LOAD Oct 2 19:41:13.350000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0003129f8 items=0 ppid=1980 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:13.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663264383935306134313632663530313763376661343937333237 Oct 2 19:41:13.350000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:41:13.350000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:41:13.350000 
audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { perfmon } for pid=2134 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit[2134]: AVC avc: denied { bpf } for pid=2134 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:41:13.350000 audit: BPF prog-id=89 op=LOAD Oct 2 19:41:13.350000 audit[2134]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000312e08 items=0 ppid=1980 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:41:13.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663264383935306134313632663530313763376661343937333237 Oct 2 19:41:13.365492 env[1112]: time="2023-10-02T19:41:13.365439839Z" level=info msg="StartContainer for \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\" returns successfully" Oct 2 19:41:13.384000 audit[2146]: AVC avc: denied { map_create } for pid=2146 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c334,c753 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c334,c753 tclass=bpf permissive=0 Oct 2 19:41:13.384000 audit[2146]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0002f37d0 a2=48 a3=c0002f37c0 items=0 ppid=1980 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" 
subj=system_u:system_r:svirt_lxc_net_t:s0:c334,c753 key=(null) Oct 2 19:41:13.384000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:41:13.917250 kubelet[1409]: E1002 19:41:13.917210 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:14.250147 kubelet[1409]: E1002 19:41:14.250035 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:14.250313 kubelet[1409]: E1002 19:41:14.250242 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9)\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:14.251391 kubelet[1409]: E1002 19:41:14.251369 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:14.268285 kubelet[1409]: I1002 19:41:14.268254 1409 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-g2lnz" podStartSLOduration=-9.223372033586554e+09 pod.CreationTimestamp="2023-10-02 19:41:11 +0000 UTC" firstStartedPulling="2023-10-02 19:41:11.416859782 +0000 UTC m=+202.116933812" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:41:14.268049738 +0000 UTC m=+204.968123768" watchObservedRunningTime="2023-10-02 19:41:14.268222334 +0000 UTC m=+204.968296374" Oct 2 19:41:14.566090 kubelet[1409]: W1002 19:41:14.565982 1409 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad2f08a_c72c_477d_9345_2a9238e54ab9.slice/cri-containerd-1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469.scope WatchSource:0}: container "1f6b454208802aef445c3243460b93a4af201a25ba6e0c34d31d7ec8b5124469" in namespace "k8s.io": not found Oct 2 19:41:14.888103 kubelet[1409]: E1002 19:41:14.888073 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:14.917685 kubelet[1409]: E1002 19:41:14.917650 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:15.252219 kubelet[1409]: E1002 19:41:15.252094 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:15.918073 kubelet[1409]: E1002 19:41:15.918004 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:16.918914 kubelet[1409]: E1002 19:41:16.918869 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:17.672042 kubelet[1409]: W1002 19:41:17.672000 1409 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad2f08a_c72c_477d_9345_2a9238e54ab9.slice/cri-containerd-e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226.scope WatchSource:0}: task e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226 not found: not found Oct 2 19:41:17.919303 kubelet[1409]: E1002 19:41:17.919236 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:18.919767 kubelet[1409]: E1002 19:41:18.919716 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:19.888605 kubelet[1409]: E1002 19:41:19.888572 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:19.920815 kubelet[1409]: E1002 19:41:19.920773 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:20.921940 kubelet[1409]: E1002 19:41:20.921883 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:21.922033 kubelet[1409]: E1002 19:41:21.921971 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:22.922740 kubelet[1409]: E1002 19:41:22.922679 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:23.923853 kubelet[1409]: E1002 19:41:23.923809 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:24.889615 kubelet[1409]: E1002 19:41:24.889585 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:24.924823 kubelet[1409]: E1002 19:41:24.924779 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:25.925748 kubelet[1409]: E1002 19:41:25.925687 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:26.926875 kubelet[1409]: E1002 19:41:26.926806 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:27.927195 kubelet[1409]: E1002 19:41:27.927133 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:27.949828 kubelet[1409]: E1002 19:41:27.949799 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:27.951402 env[1112]: time="2023-10-02T19:41:27.951367049Z" level=info msg="CreateContainer within sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:41:27.962996 env[1112]: time="2023-10-02T19:41:27.962951943Z" level=info msg="CreateContainer within sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id 
\"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\"" Oct 2 19:41:27.963356 env[1112]: time="2023-10-02T19:41:27.963323663Z" level=info msg="StartContainer for \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\"" Oct 2 19:41:27.978076 systemd[1]: Started cri-containerd-3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504.scope. Oct 2 19:41:27.986722 systemd[1]: cri-containerd-3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504.scope: Deactivated successfully. Oct 2 19:41:27.986928 systemd[1]: Stopped cri-containerd-3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504.scope. Oct 2 19:41:28.184272 env[1112]: time="2023-10-02T19:41:28.184153545Z" level=info msg="shim disconnected" id=3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504 Oct 2 19:41:28.184272 env[1112]: time="2023-10-02T19:41:28.184213427Z" level=warning msg="cleaning up after shim disconnected" id=3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504 namespace=k8s.io Oct 2 19:41:28.184272 env[1112]: time="2023-10-02T19:41:28.184221784Z" level=info msg="cleaning up dead shim" Oct 2 19:41:28.189980 env[1112]: time="2023-10-02T19:41:28.189931288Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2194 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:41:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:41:28.190214 env[1112]: time="2023-10-02T19:41:28.190153226Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:41:28.190333 env[1112]: time="2023-10-02T19:41:28.190299732Z" level=error msg="Failed to pipe stdout of container \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\"" error="reading from a closed fifo" Oct 2 19:41:28.192259 env[1112]: time="2023-10-02T19:41:28.192219001Z" level=error msg="Failed to pipe stderr of container \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\"" error="reading from a closed fifo" Oct 2 19:41:28.194351 env[1112]: time="2023-10-02T19:41:28.194321184Z" level=error msg="StartContainer for \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:41:28.194561 kubelet[1409]: E1002 19:41:28.194535 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504" Oct 2 19:41:28.194660 kubelet[1409]: E1002 19:41:28.194646 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:41:28.194660 kubelet[1409]: nsenter 
--cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:41:28.194660 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:41:28.194660 kubelet[1409]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fjbxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:41:28.194789 kubelet[1409]: E1002 19:41:28.194682 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:28.271621 kubelet[1409]: I1002 19:41:28.271592 1409 scope.go:115] "RemoveContainer" containerID="e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226" Oct 2 19:41:28.271852 kubelet[1409]: I1002 19:41:28.271838 1409 scope.go:115] "RemoveContainer" containerID="e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226" Oct 2 19:41:28.272596 env[1112]: time="2023-10-02T19:41:28.272552963Z" level=info msg="RemoveContainer for \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\"" Oct 2 19:41:28.272725 env[1112]: time="2023-10-02T19:41:28.272557482Z" level=info msg="RemoveContainer for \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\"" Oct 2 19:41:28.272761 env[1112]: time="2023-10-02T19:41:28.272737231Z" level=error msg="RemoveContainer for \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\" failed" error="failed to set removing state for container \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\": container is already in removing state" Oct 2 19:41:28.272894 kubelet[1409]: E1002 19:41:28.272876 1409 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code 
= Unknown desc = failed to set removing state for container \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\": container is already in removing state" containerID="e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226" Oct 2 19:41:28.272971 kubelet[1409]: I1002 19:41:28.272909 1409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226} err="rpc error: code = Unknown desc = failed to set removing state for container \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\": container is already in removing state" Oct 2 19:41:28.276682 env[1112]: time="2023-10-02T19:41:28.276651822Z" level=info msg="RemoveContainer for \"e20df257667d4b02d7f30f47def0079e32c26281e30459dcf03c9173217ef226\" returns successfully" Oct 2 19:41:28.276864 kubelet[1409]: E1002 19:41:28.276838 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:28.277046 kubelet[1409]: E1002 19:41:28.277032 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9)\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:28.927783 kubelet[1409]: E1002 19:41:28.927730 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:28.959840 systemd[1]: run-containerd-runc-k8s.io-3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504-runc.wtq9sD.mount: Deactivated successfully. Oct 2 19:41:28.959924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504-rootfs.mount: Deactivated successfully. 
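Every start attempt for the mount-cgroup init container above (1f6b454208…, e20df25766…, 3378895ecf…) fails the same way: runc reports that writing the SELinux keyring label (/proc/self/attr/keycreate) returns "invalid argument" while setting up the process, the spec above requests SELinuxOptions{Type:spc_t,Level:s0} for that container, and the kubelet then backs the pod off with CrashLoopBackOff. A minimal triage sketch for the node, assuming shell access and containerd's default state directory (the same one the init.pid error above points at); <container-id> is a placeholder for one of the IDs from the log, not a value taken from it:

# Is SELinux enabled/enforcing on this node? (1 = enforcing, 0 = permissive;
# the file is absent when SELinux is not built in or not mounted)
cat /sys/fs/selinux/enforce
# Which label did runc try to apply? The OCI bundle sits next to the init.pid
# path quoted in the log; the config.json only exists while the failed bundle
# is still around.
grep -o '"selinuxLabel": *"[^"]*"' \
    /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>/config.json
# Any related kernel/audit messages?
journalctl -k | grep -iE 'selinux|keycreate' | tail -n 20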
Oct 2 19:41:29.796222 kubelet[1409]: E1002 19:41:29.796177 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:29.890591 kubelet[1409]: E1002 19:41:29.890561 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:29.927871 kubelet[1409]: E1002 19:41:29.927842 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:30.928425 kubelet[1409]: E1002 19:41:30.928379 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:31.288336 kubelet[1409]: W1002 19:41:31.288228 1409 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad2f08a_c72c_477d_9345_2a9238e54ab9.slice/cri-containerd-3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504.scope WatchSource:0}: task 3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504 not found: not found Oct 2 19:41:31.929485 kubelet[1409]: E1002 19:41:31.929426 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:32.929803 kubelet[1409]: E1002 19:41:32.929759 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:33.930498 kubelet[1409]: E1002 19:41:33.930447 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:34.891437 kubelet[1409]: E1002 19:41:34.891400 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:34.930805 kubelet[1409]: E1002 19:41:34.930766 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:35.931107 kubelet[1409]: E1002 19:41:35.931058 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:36.931669 kubelet[1409]: E1002 19:41:36.931608 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:37.932200 kubelet[1409]: E1002 19:41:37.932146 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:38.933275 kubelet[1409]: E1002 19:41:38.933220 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:39.892347 kubelet[1409]: E1002 19:41:39.892311 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:39.933785 kubelet[1409]: E1002 19:41:39.933740 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:39.949627 kubelet[1409]: E1002 19:41:39.949593 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 2 19:41:39.950774 kubelet[1409]: E1002 19:41:39.950598 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9)\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:40.934117 kubelet[1409]: E1002 19:41:40.934037 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:41.935135 kubelet[1409]: E1002 19:41:41.935077 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:42.935982 kubelet[1409]: E1002 19:41:42.935896 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:43.937014 kubelet[1409]: E1002 19:41:43.936943 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:44.893425 kubelet[1409]: E1002 19:41:44.893395 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:44.937630 kubelet[1409]: E1002 19:41:44.937594 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:45.938774 kubelet[1409]: E1002 19:41:45.938718 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:46.939510 kubelet[1409]: E1002 19:41:46.939453 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:47.940659 kubelet[1409]: E1002 19:41:47.940571 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:48.941292 kubelet[1409]: E1002 19:41:48.941232 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:49.796650 kubelet[1409]: E1002 19:41:49.796602 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:49.805126 env[1112]: time="2023-10-02T19:41:49.805094170Z" level=info msg="StopPodSandbox for \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\"" Oct 2 19:41:49.805375 env[1112]: time="2023-10-02T19:41:49.805191105Z" level=info msg="TearDown network for sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" successfully" Oct 2 19:41:49.805375 env[1112]: time="2023-10-02T19:41:49.805230100Z" level=info msg="StopPodSandbox for \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" returns successfully" Oct 2 19:41:49.805558 env[1112]: time="2023-10-02T19:41:49.805535765Z" level=info msg="RemovePodSandbox for \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\"" Oct 2 19:41:49.805602 env[1112]: time="2023-10-02T19:41:49.805563529Z" level=info msg="Forcibly stopping sandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\"" Oct 2 19:41:49.805633 env[1112]: time="2023-10-02T19:41:49.805618574Z" level=info msg="TearDown network for sandbox 
\"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" successfully" Oct 2 19:41:49.809072 env[1112]: time="2023-10-02T19:41:49.809038190Z" level=info msg="RemovePodSandbox \"7f3e0dd5602e14352c63fc5430787542e05b45ddbd591be64a616c670d3ae36b\" returns successfully" Oct 2 19:41:49.894349 kubelet[1409]: E1002 19:41:49.894326 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:49.941792 kubelet[1409]: E1002 19:41:49.941763 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:50.942372 kubelet[1409]: E1002 19:41:50.942322 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:50.949913 kubelet[1409]: E1002 19:41:50.949893 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:50.951774 env[1112]: time="2023-10-02T19:41:50.951745530Z" level=info msg="CreateContainer within sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:41:50.961586 env[1112]: time="2023-10-02T19:41:50.961553351Z" level=info msg="CreateContainer within sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812\"" Oct 2 19:41:50.961887 env[1112]: time="2023-10-02T19:41:50.961866190Z" level=info msg="StartContainer for \"b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812\"" Oct 2 19:41:50.974233 systemd[1]: Started cri-containerd-b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812.scope. Oct 2 19:41:50.982099 systemd[1]: cri-containerd-b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812.scope: Deactivated successfully. Oct 2 19:41:50.982345 systemd[1]: Stopped cri-containerd-b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812.scope. Oct 2 19:41:50.984876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812-rootfs.mount: Deactivated successfully. 
Oct 2 19:41:50.990099 env[1112]: time="2023-10-02T19:41:50.990061984Z" level=info msg="shim disconnected" id=b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812 Oct 2 19:41:50.990204 env[1112]: time="2023-10-02T19:41:50.990103925Z" level=warning msg="cleaning up after shim disconnected" id=b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812 namespace=k8s.io Oct 2 19:41:50.990204 env[1112]: time="2023-10-02T19:41:50.990111799Z" level=info msg="cleaning up dead shim" Oct 2 19:41:50.995988 env[1112]: time="2023-10-02T19:41:50.995959161Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:41:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2233 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:41:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:41:50.996199 env[1112]: time="2023-10-02T19:41:50.996151990Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:41:50.996423 env[1112]: time="2023-10-02T19:41:50.996388342Z" level=error msg="Failed to pipe stdout of container \"b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812\"" error="reading from a closed fifo" Oct 2 19:41:50.996485 env[1112]: time="2023-10-02T19:41:50.996416466Z" level=error msg="Failed to pipe stderr of container \"b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812\"" error="reading from a closed fifo" Oct 2 19:41:50.998568 env[1112]: time="2023-10-02T19:41:50.998528739Z" level=error msg="StartContainer for \"b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:41:50.998714 kubelet[1409]: E1002 19:41:50.998691 1409 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812" Oct 2 19:41:50.998797 kubelet[1409]: E1002 19:41:50.998786 1409 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:41:50.998797 kubelet[1409]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:41:50.998797 kubelet[1409]: rm /hostbin/cilium-mount Oct 2 19:41:50.998797 kubelet[1409]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fjbxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:41:50.998921 kubelet[1409]: E1002 19:41:50.998819 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:51.309876 kubelet[1409]: I1002 19:41:51.309198 1409 scope.go:115] "RemoveContainer" containerID="3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504" Oct 2 19:41:51.309876 kubelet[1409]: I1002 19:41:51.309471 1409 scope.go:115] "RemoveContainer" containerID="3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504" Oct 2 19:41:51.310561 env[1112]: time="2023-10-02T19:41:51.310505447Z" level=info msg="RemoveContainer for \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\"" Oct 2 19:41:51.310846 env[1112]: time="2023-10-02T19:41:51.310824717Z" level=info msg="RemoveContainer for \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\"" Oct 2 19:41:51.310919 env[1112]: time="2023-10-02T19:41:51.310885373Z" level=error msg="RemoveContainer for \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\" failed" error="failed to set removing state for container \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\": container is already in removing state" Oct 2 19:41:51.310984 kubelet[1409]: E1002 19:41:51.310972 1409 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\": container is already in removing state" 
containerID="3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504" Oct 2 19:41:51.311048 kubelet[1409]: E1002 19:41:51.310994 1409 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504": container is already in removing state; Skipping pod "cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9)" Oct 2 19:41:51.311048 kubelet[1409]: E1002 19:41:51.311035 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:51.311222 kubelet[1409]: E1002 19:41:51.311213 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9)\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:41:51.389622 env[1112]: time="2023-10-02T19:41:51.389569846Z" level=info msg="RemoveContainer for \"3378895ecf0bb5d74267bc6a98e87c2bb4407b9d247a4fcdd63d8391c5524504\" returns successfully" Oct 2 19:41:51.942881 kubelet[1409]: E1002 19:41:51.942819 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:52.943364 kubelet[1409]: E1002 19:41:52.943318 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:52.949936 kubelet[1409]: E1002 19:41:52.949906 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:41:53.943918 kubelet[1409]: E1002 19:41:53.943855 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:54.093668 kubelet[1409]: W1002 19:41:54.093631 1409 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad2f08a_c72c_477d_9345_2a9238e54ab9.slice/cri-containerd-b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812.scope WatchSource:0}: task b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812 not found: not found Oct 2 19:41:54.895691 kubelet[1409]: E1002 19:41:54.895661 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:54.944265 kubelet[1409]: E1002 19:41:54.944220 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:55.945215 kubelet[1409]: E1002 19:41:55.945154 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:56.945761 kubelet[1409]: E1002 19:41:56.945710 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:57.946880 kubelet[1409]: E1002 19:41:57.946823 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:58.947836 kubelet[1409]: E1002 19:41:58.947790 1409 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:41:59.897002 kubelet[1409]: E1002 19:41:59.896978 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:41:59.948506 kubelet[1409]: E1002 19:41:59.948436 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:00.949091 kubelet[1409]: E1002 19:42:00.949029 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:01.950039 kubelet[1409]: E1002 19:42:01.949986 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:02.950580 kubelet[1409]: E1002 19:42:02.950489 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:03.950794 kubelet[1409]: E1002 19:42:03.950750 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:04.898303 kubelet[1409]: E1002 19:42:04.898273 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:04.949935 kubelet[1409]: E1002 19:42:04.949884 1409 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:42:04.950230 kubelet[1409]: E1002 19:42:04.950208 1409 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-cqcsd_kube-system(5ad2f08a-c72c-477d-9345-2a9238e54ab9)\"" pod="kube-system/cilium-cqcsd" podUID=5ad2f08a-c72c-477d-9345-2a9238e54ab9 Oct 2 19:42:04.950826 kubelet[1409]: E1002 19:42:04.950803 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:05.951662 kubelet[1409]: E1002 19:42:05.951613 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:06.952216 kubelet[1409]: E1002 19:42:06.952147 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:07.952432 kubelet[1409]: E1002 19:42:07.952390 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:08.953000 kubelet[1409]: E1002 19:42:08.952938 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:09.796717 kubelet[1409]: E1002 19:42:09.796675 1409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:09.898617 kubelet[1409]: E1002 19:42:09.898597 1409 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:42:09.953533 kubelet[1409]: E1002 19:42:09.953511 1409 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:10.953978 kubelet[1409]: E1002 19:42:10.953923 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:11.955022 kubelet[1409]: E1002 19:42:11.954994 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:12.043901 env[1112]: time="2023-10-02T19:42:12.043858606Z" level=info msg="StopPodSandbox for \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\"" Oct 2 19:42:12.044319 env[1112]: time="2023-10-02T19:42:12.043931986Z" level=info msg="Container to stop \"b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:42:12.045544 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54-shm.mount: Deactivated successfully. Oct 2 19:42:12.048729 systemd[1]: cri-containerd-a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54.scope: Deactivated successfully. Oct 2 19:42:12.047000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:42:12.049543 kernel: kauditd_printk_skb: 168 callbacks suppressed Oct 2 19:42:12.049605 kernel: audit: type=1334 audit(1696275732.047:719): prog-id=82 op=UNLOAD Oct 2 19:42:12.052254 env[1112]: time="2023-10-02T19:42:12.052219657Z" level=info msg="StopContainer for \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\" with timeout 30 (s)" Oct 2 19:42:12.052571 env[1112]: time="2023-10-02T19:42:12.052543244Z" level=info msg="Stop container \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\" with signal terminated" Oct 2 19:42:12.052000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:42:12.054175 kernel: audit: type=1334 audit(1696275732.052:720): prog-id=85 op=UNLOAD Oct 2 19:42:12.060200 systemd[1]: cri-containerd-fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5.scope: Deactivated successfully. Oct 2 19:42:12.059000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:42:12.062203 kernel: audit: type=1334 audit(1696275732.059:721): prog-id=86 op=UNLOAD Oct 2 19:42:12.063846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54-rootfs.mount: Deactivated successfully. Oct 2 19:42:12.064000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:42:12.066233 kernel: audit: type=1334 audit(1696275732.064:722): prog-id=89 op=UNLOAD Oct 2 19:42:12.073339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:12.075669 env[1112]: time="2023-10-02T19:42:12.075612857Z" level=info msg="shim disconnected" id=fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5 Oct 2 19:42:12.075669 env[1112]: time="2023-10-02T19:42:12.075654266Z" level=warning msg="cleaning up after shim disconnected" id=fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5 namespace=k8s.io Oct 2 19:42:12.075669 env[1112]: time="2023-10-02T19:42:12.075661900Z" level=info msg="cleaning up dead shim" Oct 2 19:42:12.075870 env[1112]: time="2023-10-02T19:42:12.075612777Z" level=info msg="shim disconnected" id=a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54 Oct 2 19:42:12.075870 env[1112]: time="2023-10-02T19:42:12.075865508Z" level=warning msg="cleaning up after shim disconnected" id=a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54 namespace=k8s.io Oct 2 19:42:12.075931 env[1112]: time="2023-10-02T19:42:12.075872292Z" level=info msg="cleaning up dead shim" Oct 2 19:42:12.082092 env[1112]: time="2023-10-02T19:42:12.082023219Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2281 runtime=io.containerd.runc.v2\n" Oct 2 19:42:12.082341 env[1112]: time="2023-10-02T19:42:12.082303062Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2282 runtime=io.containerd.runc.v2\n" Oct 2 19:42:12.082618 env[1112]: time="2023-10-02T19:42:12.082593055Z" level=info msg="TearDown network for sandbox \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" successfully" Oct 2 19:42:12.082687 env[1112]: time="2023-10-02T19:42:12.082620046Z" level=info msg="StopPodSandbox for \"a0644015201cb92c0b91b32ca2f5c607f1889edfd0fd132f13aa5a80d013cd54\" returns successfully" Oct 2 19:42:12.085341 env[1112]: time="2023-10-02T19:42:12.085305406Z" level=info msg="StopContainer for \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\" returns successfully" Oct 2 19:42:12.085585 env[1112]: time="2023-10-02T19:42:12.085563047Z" level=info msg="StopPodSandbox for \"281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827\"" Oct 2 19:42:12.085643 env[1112]: time="2023-10-02T19:42:12.085612551Z" level=info msg="Container to stop \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:42:12.086745 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827-shm.mount: Deactivated successfully. Oct 2 19:42:12.091471 systemd[1]: cri-containerd-281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827.scope: Deactivated successfully. Oct 2 19:42:12.090000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:42:12.093178 kernel: audit: type=1334 audit(1696275732.090:723): prog-id=78 op=UNLOAD Oct 2 19:42:12.094000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:42:12.096196 kernel: audit: type=1334 audit(1696275732.094:724): prog-id=81 op=UNLOAD Oct 2 19:42:12.107585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827-rootfs.mount: Deactivated successfully. 
Oct 2 19:42:12.112525 env[1112]: time="2023-10-02T19:42:12.112474816Z" level=info msg="shim disconnected" id=281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827 Oct 2 19:42:12.112651 env[1112]: time="2023-10-02T19:42:12.112530232Z" level=warning msg="cleaning up after shim disconnected" id=281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827 namespace=k8s.io Oct 2 19:42:12.112651 env[1112]: time="2023-10-02T19:42:12.112545300Z" level=info msg="cleaning up dead shim" Oct 2 19:42:12.117623 kubelet[1409]: I1002 19:42:12.117577 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.117623 kubelet[1409]: I1002 19:42:12.117599 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-run\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.117805 kubelet[1409]: I1002 19:42:12.117658 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ad2f08a-c72c-477d-9345-2a9238e54ab9-clustermesh-secrets\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.117805 kubelet[1409]: I1002 19:42:12.117693 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-lib-modules\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.117805 kubelet[1409]: I1002 19:42:12.117715 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-hostproc\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.117805 kubelet[1409]: I1002 19:42:12.117734 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-etc-cni-netd\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.117805 kubelet[1409]: I1002 19:42:12.117730 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.117805 kubelet[1409]: I1002 19:42:12.117754 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.118006 kubelet[1409]: I1002 19:42:12.117760 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-ipsec-secrets\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118006 kubelet[1409]: I1002 19:42:12.117782 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-cgroup\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118006 kubelet[1409]: I1002 19:42:12.117806 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-host-proc-sys-kernel\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118006 kubelet[1409]: I1002 19:42:12.117828 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-bpf-maps\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118006 kubelet[1409]: I1002 19:42:12.117849 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-xtables-lock\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118006 kubelet[1409]: I1002 19:42:12.117871 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-host-proc-sys-net\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118280 kubelet[1409]: I1002 19:42:12.117899 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-config-path\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118280 kubelet[1409]: I1002 19:42:12.117926 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjbxh\" (UniqueName: \"kubernetes.io/projected/5ad2f08a-c72c-477d-9345-2a9238e54ab9-kube-api-access-fjbxh\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118280 kubelet[1409]: I1002 19:42:12.117924 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.118280 kubelet[1409]: I1002 19:42:12.117942 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.118280 kubelet[1409]: I1002 19:42:12.117951 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ad2f08a-c72c-477d-9345-2a9238e54ab9-hubble-tls\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118459 kubelet[1409]: I1002 19:42:12.117954 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.118459 kubelet[1409]: I1002 19:42:12.117972 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cni-path\") pod \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\" (UID: \"5ad2f08a-c72c-477d-9345-2a9238e54ab9\") " Oct 2 19:42:12.118459 kubelet[1409]: I1002 19:42:12.118001 1409 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.118459 kubelet[1409]: I1002 19:42:12.118014 1409 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-run\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.118459 kubelet[1409]: I1002 19:42:12.118028 1409 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-lib-modules\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.118459 kubelet[1409]: I1002 19:42:12.118047 1409 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-hostproc\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.118459 kubelet[1409]: W1002 19:42:12.118030 1409 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/5ad2f08a-c72c-477d-9345-2a9238e54ab9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:42:12.118693 kubelet[1409]: I1002 19:42:12.118059 1409 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.118693 kubelet[1409]: I1002 19:42:12.118071 1409 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.118693 kubelet[1409]: I1002 19:42:12.118092 1409 
operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.118801 kubelet[1409]: I1002 19:42:12.118737 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.118801 kubelet[1409]: I1002 19:42:12.118763 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.119490 kubelet[1409]: I1002 19:42:12.119461 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad2f08a-c72c-477d-9345-2a9238e54ab9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:42:12.119716 kubelet[1409]: I1002 19:42:12.119690 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:42:12.119763 kubelet[1409]: I1002 19:42:12.119719 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:42:12.120243 env[1112]: time="2023-10-02T19:42:12.117578416Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:42:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2323 runtime=io.containerd.runc.v2\n" Oct 2 19:42:12.120462 env[1112]: time="2023-10-02T19:42:12.120439010Z" level=info msg="TearDown network for sandbox \"281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827\" successfully" Oct 2 19:42:12.120532 kubelet[1409]: I1002 19:42:12.120507 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad2f08a-c72c-477d-9345-2a9238e54ab9-kube-api-access-fjbxh" (OuterVolumeSpecName: "kube-api-access-fjbxh") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "kube-api-access-fjbxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:12.120619 env[1112]: time="2023-10-02T19:42:12.120517911Z" level=info msg="StopPodSandbox for \"281854a09b92137bd26cff1bafcf95e9daeec719ba232d38030c5d7929c04827\" returns successfully" Oct 2 19:42:12.121943 kubelet[1409]: I1002 19:42:12.121923 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad2f08a-c72c-477d-9345-2a9238e54ab9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:12.122053 kubelet[1409]: I1002 19:42:12.122030 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5ad2f08a-c72c-477d-9345-2a9238e54ab9" (UID: "5ad2f08a-c72c-477d-9345-2a9238e54ab9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:42:12.219240 kubelet[1409]: I1002 19:42:12.219079 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkw9l\" (UniqueName: \"kubernetes.io/projected/95512a34-41cc-46b7-b757-f341f392733a-kube-api-access-gkw9l\") pod \"95512a34-41cc-46b7-b757-f341f392733a\" (UID: \"95512a34-41cc-46b7-b757-f341f392733a\") " Oct 2 19:42:12.219240 kubelet[1409]: I1002 19:42:12.219216 1409 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95512a34-41cc-46b7-b757-f341f392733a-cilium-config-path\") pod \"95512a34-41cc-46b7-b757-f341f392733a\" (UID: \"95512a34-41cc-46b7-b757-f341f392733a\") " Oct 2 19:42:12.219352 kubelet[1409]: I1002 19:42:12.219247 1409 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fjbxh\" (UniqueName: \"kubernetes.io/projected/5ad2f08a-c72c-477d-9345-2a9238e54ab9-kube-api-access-fjbxh\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219352 kubelet[1409]: I1002 19:42:12.219260 1409 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ad2f08a-c72c-477d-9345-2a9238e54ab9-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219352 kubelet[1409]: I1002 19:42:12.219272 1409 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cni-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219352 kubelet[1409]: I1002 19:42:12.219283 1409 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219352 kubelet[1409]: I1002 19:42:12.219293 1409 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219352 kubelet[1409]: I1002 19:42:12.219304 1409 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219352 kubelet[1409]: I1002 19:42:12.219315 1409 
reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ad2f08a-c72c-477d-9345-2a9238e54ab9-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219352 kubelet[1409]: I1002 19:42:12.219329 1409 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ad2f08a-c72c-477d-9345-2a9238e54ab9-cilium-ipsec-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219537 kubelet[1409]: I1002 19:42:12.219340 1409 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ad2f08a-c72c-477d-9345-2a9238e54ab9-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.219537 kubelet[1409]: W1002 19:42:12.219404 1409 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/95512a34-41cc-46b7-b757-f341f392733a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:42:12.221118 kubelet[1409]: I1002 19:42:12.221086 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95512a34-41cc-46b7-b757-f341f392733a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "95512a34-41cc-46b7-b757-f341f392733a" (UID: "95512a34-41cc-46b7-b757-f341f392733a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:42:12.222021 kubelet[1409]: I1002 19:42:12.222000 1409 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95512a34-41cc-46b7-b757-f341f392733a-kube-api-access-gkw9l" (OuterVolumeSpecName: "kube-api-access-gkw9l") pod "95512a34-41cc-46b7-b757-f341f392733a" (UID: "95512a34-41cc-46b7-b757-f341f392733a"). InnerVolumeSpecName "kube-api-access-gkw9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:42:12.320472 kubelet[1409]: I1002 19:42:12.320428 1409 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-gkw9l\" (UniqueName: \"kubernetes.io/projected/95512a34-41cc-46b7-b757-f341f392733a-kube-api-access-gkw9l\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.320472 kubelet[1409]: I1002 19:42:12.320459 1409 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95512a34-41cc-46b7-b757-f341f392733a-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:42:12.339323 kubelet[1409]: I1002 19:42:12.339307 1409 scope.go:115] "RemoveContainer" containerID="b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812" Oct 2 19:42:12.342136 systemd[1]: Removed slice kubepods-burstable-pod5ad2f08a_c72c_477d_9345_2a9238e54ab9.slice. Oct 2 19:42:12.342842 env[1112]: time="2023-10-02T19:42:12.342810552Z" level=info msg="RemoveContainer for \"b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812\"" Oct 2 19:42:12.345282 env[1112]: time="2023-10-02T19:42:12.345263519Z" level=info msg="RemoveContainer for \"b386509eff92c2c6b4317509f4344b4e1e118556d8be0ebd7ce28450c47ca812\" returns successfully" Oct 2 19:42:12.345394 kubelet[1409]: I1002 19:42:12.345376 1409 scope.go:115] "RemoveContainer" containerID="fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5" Oct 2 19:42:12.345391 systemd[1]: Removed slice kubepods-besteffort-pod95512a34_41cc_46b7_b757_f341f392733a.slice. 
Oct 2 19:42:12.346269 env[1112]: time="2023-10-02T19:42:12.346250060Z" level=info msg="RemoveContainer for \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\"" Oct 2 19:42:12.348765 env[1112]: time="2023-10-02T19:42:12.348742552Z" level=info msg="RemoveContainer for \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\" returns successfully" Oct 2 19:42:12.348846 kubelet[1409]: I1002 19:42:12.348834 1409 scope.go:115] "RemoveContainer" containerID="fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5" Oct 2 19:42:12.349019 env[1112]: time="2023-10-02T19:42:12.348953504Z" level=error msg="ContainerStatus for \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\": not found" Oct 2 19:42:12.349130 kubelet[1409]: E1002 19:42:12.349115 1409 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\": not found" containerID="fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5" Oct 2 19:42:12.349190 kubelet[1409]: I1002 19:42:12.349146 1409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5} err="failed to get container status \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdf2d8950a4162f5017c7fa497327e574a4e856c57efa5714763261bd92111c5\": not found" Oct 2 19:42:12.955909 kubelet[1409]: E1002 19:42:12.955858 1409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:42:13.045060 systemd[1]: var-lib-kubelet-pods-5ad2f08a\x2dc72c\x2d477d\x2d9345\x2d2a9238e54ab9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjbxh.mount: Deactivated successfully. Oct 2 19:42:13.045158 systemd[1]: var-lib-kubelet-pods-95512a34\x2d41cc\x2d46b7\x2db757\x2df341f392733a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgkw9l.mount: Deactivated successfully. Oct 2 19:42:13.045225 systemd[1]: var-lib-kubelet-pods-5ad2f08a\x2dc72c\x2d477d\x2d9345\x2d2a9238e54ab9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:42:13.045272 systemd[1]: var-lib-kubelet-pods-5ad2f08a\x2dc72c\x2d477d\x2d9345\x2d2a9238e54ab9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:42:13.045316 systemd[1]: var-lib-kubelet-pods-5ad2f08a\x2dc72c\x2d477d\x2d9345\x2d2a9238e54ab9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
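Besides the cilium failures, two lower-severity messages repeat through the whole window above: file_linux.go:61 logs roughly once per second because the path the kubelet watches for static pod manifests (/etc/kubernetes/manifests) does not exist on this node, and dns.go:156 warns that the node's resolv.conf lists more nameservers than the kubelet will apply (it kept 1.1.1.1, 1.0.0.1 and 8.8.8.8). A quick check for both conditions, sketched under the assumption that the kubelet reads its config from the common /var/lib/kubelet/config.yaml location (it may instead be configured purely via flags):

# More than three nameserver lines here triggers the dns.go warning:
grep -c '^nameserver' /etc/resolv.conf
# Does the static pod directory exist?
ls -ld /etc/kubernetes/manifests 2>/dev/null || echo "/etc/kubernetes/manifests is missing"
# Which path is the kubelet actually configured to watch? (config path is an assumption)
grep -i staticpodpath /var/lib/kubelet/config.yaml 2>/dev/null
# Creating the directory silences the per-second warning if static pods are
# wanted on this node; otherwise dropping staticPodPath from the config does.
# mkdir -p /etc/kubernetes/manifests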