Oct  2 19:43:26.174421 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023
Oct  2 19:43:26.174444 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1
Oct  2 19:43:26.174453 kernel: BIOS-provided physical RAM map:
Oct  2 19:43:26.174460 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  2 19:43:26.174466 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  2 19:43:26.174472 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  2 19:43:26.174482 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Oct  2 19:43:26.174488 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Oct  2 19:43:26.174494 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Oct  2 19:43:26.174500 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  2 19:43:26.174506 kernel: NX (Execute Disable) protection: active
Oct  2 19:43:26.174512 kernel: SMBIOS 2.7 present.
Oct  2 19:43:26.174519 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Oct  2 19:43:26.174525 kernel: Hypervisor detected: KVM
Oct  2 19:43:26.174535 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  2 19:43:26.174542 kernel: kvm-clock: cpu 0, msr 7cf8a001, primary cpu clock
Oct  2 19:43:26.174549 kernel: kvm-clock: using sched offset of 6408082319 cycles
Oct  2 19:43:26.174556 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  2 19:43:26.174563 kernel: tsc: Detected 2499.994 MHz processor
Oct  2 19:43:26.174570 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct  2 19:43:26.174579 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct  2 19:43:26.174586 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Oct  2 19:43:26.174593 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  2 19:43:26.174601 kernel: Using GB pages for direct mapping
Oct  2 19:43:26.174724 kernel: ACPI: Early table checksum verification disabled
Oct  2 19:43:26.174752 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Oct  2 19:43:26.174767 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Oct  2 19:43:26.174780 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct  2 19:43:26.174792 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Oct  2 19:43:26.174808 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Oct  2 19:43:26.174819 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Oct  2 19:43:26.174830 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct  2 19:43:26.174837 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Oct  2 19:43:26.174844 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct  2 19:43:26.174851 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Oct  2 19:43:26.174858 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Oct  2 19:43:26.174865 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Oct  2 19:43:26.174878 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Oct  2 19:43:26.174885 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Oct  2 19:43:26.174892 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Oct  2 19:43:26.174903 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Oct  2 19:43:26.174911 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Oct  2 19:43:26.174919 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Oct  2 19:43:26.174926 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Oct  2 19:43:26.174943 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Oct  2 19:43:26.175033 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Oct  2 19:43:26.175048 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Oct  2 19:43:26.175062 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct  2 19:43:26.175076 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct  2 19:43:26.175090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Oct  2 19:43:26.175104 kernel: NUMA: Initialized distance table, cnt=1
Oct  2 19:43:26.175118 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Oct  2 19:43:26.175136 kernel: Zone ranges:
Oct  2 19:43:26.175150 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  2 19:43:26.175164 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Oct  2 19:43:26.175178 kernel:   Normal   empty
Oct  2 19:43:26.175192 kernel: Movable zone start for each node
Oct  2 19:43:26.175206 kernel: Early memory node ranges
Oct  2 19:43:26.175220 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  2 19:43:26.175234 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Oct  2 19:43:26.175248 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Oct  2 19:43:26.175418 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  2 19:43:26.175436 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  2 19:43:26.175450 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Oct  2 19:43:26.175464 kernel: ACPI: PM-Timer IO Port: 0xb008
Oct  2 19:43:26.175477 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  2 19:43:26.175492 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Oct  2 19:43:26.175506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  2 19:43:26.175520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  2 19:43:26.175534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  2 19:43:26.175553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  2 19:43:26.175567 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  2 19:43:26.175582 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct  2 19:43:26.175596 kernel: TSC deadline timer available
Oct  2 19:43:26.175610 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct  2 19:43:26.175623 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Oct  2 19:43:26.175637 kernel: Booting paravirtualized kernel on KVM
Oct  2 19:43:26.175651 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  2 19:43:26.175665 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Oct  2 19:43:26.175683 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Oct  2 19:43:26.175697 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Oct  2 19:43:26.175711 kernel: pcpu-alloc: [0] 0 1 
Oct  2 19:43:26.175725 kernel: kvm-guest: stealtime: cpu 0, msr 7d61c0c0
Oct  2 19:43:26.175739 kernel: kvm-guest: PV spinlocks enabled
Oct  2 19:43:26.175753 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct  2 19:43:26.175767 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Oct  2 19:43:26.175780 kernel: Policy zone: DMA32
Oct  2 19:43:26.175797 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1
Oct  2 19:43:26.175815 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct  2 19:43:26.175828 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct  2 19:43:26.175842 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  2 19:43:26.175856 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  2 19:43:26.175870 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 121024K reserved, 0K cma-reserved)
Oct  2 19:43:26.175884 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct  2 19:43:26.175898 kernel: Kernel/User page tables isolation: enabled
Oct  2 19:43:26.175913 kernel: ftrace: allocating 34453 entries in 135 pages
Oct  2 19:43:26.175930 kernel: ftrace: allocated 135 pages with 4 groups
Oct  2 19:43:26.175943 kernel: rcu: Hierarchical RCU implementation.
Oct  2 19:43:26.175958 kernel: rcu:         RCU event tracing is enabled.
Oct  2 19:43:26.175973 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct  2 19:43:26.175987 kernel:         Rude variant of Tasks RCU enabled.
Oct  2 19:43:26.176001 kernel:         Tracing variant of Tasks RCU enabled.
Oct  2 19:43:26.176015 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  2 19:43:26.176029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct  2 19:43:26.176043 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct  2 19:43:26.176060 kernel: random: crng init done
Oct  2 19:43:26.176074 kernel: Console: colour VGA+ 80x25
Oct  2 19:43:26.176088 kernel: printk: console [ttyS0] enabled
Oct  2 19:43:26.176102 kernel: ACPI: Core revision 20210730
Oct  2 19:43:26.176116 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Oct  2 19:43:26.176130 kernel: APIC: Switch to symmetric I/O mode setup
Oct  2 19:43:26.176144 kernel: x2apic enabled
Oct  2 19:43:26.176158 kernel: Switched APIC routing to physical x2apic.
Oct  2 19:43:26.176172 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Oct  2 19:43:26.176190 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Oct  2 19:43:26.176204 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct  2 19:43:26.176218 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Oct  2 19:43:26.176233 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  2 19:43:26.176314 kernel: Spectre V2 : Mitigation: Retpolines
Oct  2 19:43:26.176333 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct  2 19:43:26.176348 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct  2 19:43:26.176362 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Oct  2 19:43:26.176377 kernel: RETBleed: Vulnerable
Oct  2 19:43:26.176391 kernel: Speculative Store Bypass: Vulnerable
Oct  2 19:43:26.176405 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Oct  2 19:43:26.176420 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct  2 19:43:26.176435 kernel: GDS: Unknown: Dependent on hypervisor status
Oct  2 19:43:26.176449 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  2 19:43:26.176467 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  2 19:43:26.176482 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  2 19:43:26.176497 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Oct  2 19:43:26.176512 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Oct  2 19:43:26.176527 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Oct  2 19:43:26.176542 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Oct  2 19:43:26.176560 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Oct  2 19:43:26.176575 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct  2 19:43:26.176590 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  2 19:43:26.176604 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Oct  2 19:43:26.176618 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Oct  2 19:43:26.176631 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Oct  2 19:43:26.176646 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Oct  2 19:43:26.176660 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Oct  2 19:43:26.176675 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Oct  2 19:43:26.176690 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Oct  2 19:43:26.176705 kernel: Freeing SMP alternatives memory: 32K
Oct  2 19:43:26.176723 kernel: pid_max: default: 32768 minimum: 301
Oct  2 19:43:26.176738 kernel: LSM: Security Framework initializing
Oct  2 19:43:26.176752 kernel: SELinux:  Initializing.
Oct  2 19:43:26.176767 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct  2 19:43:26.176782 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct  2 19:43:26.176797 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Oct  2 19:43:26.176812 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Oct  2 19:43:26.176828 kernel: signal: max sigframe size: 3632
Oct  2 19:43:26.176843 kernel: rcu: Hierarchical SRCU implementation.
Oct  2 19:43:26.176858 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct  2 19:43:26.176877 kernel: smp: Bringing up secondary CPUs ...
Oct  2 19:43:26.177313 kernel: x86: Booting SMP configuration:
Oct  2 19:43:26.177338 kernel: .... node  #0, CPUs:      #1
Oct  2 19:43:26.177353 kernel: kvm-clock: cpu 1, msr 7cf8a041, secondary cpu clock
Oct  2 19:43:26.177368 kernel: kvm-guest: stealtime: cpu 1, msr 7d71c0c0
Oct  2 19:43:26.177384 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Oct  2 19:43:26.177400 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Oct  2 19:43:26.177413 kernel: smp: Brought up 1 node, 2 CPUs
Oct  2 19:43:26.177428 kernel: smpboot: Max logical packages: 1
Oct  2 19:43:26.177448 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Oct  2 19:43:26.177463 kernel: devtmpfs: initialized
Oct  2 19:43:26.177477 kernel: x86/mm: Memory block size: 128MB
Oct  2 19:43:26.177493 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  2 19:43:26.177508 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct  2 19:43:26.177523 kernel: pinctrl core: initialized pinctrl subsystem
Oct  2 19:43:26.177538 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  2 19:43:26.177553 kernel: audit: initializing netlink subsys (disabled)
Oct  2 19:43:26.177568 kernel: audit: type=2000 audit(1696275805.497:1): state=initialized audit_enabled=0 res=1
Oct  2 19:43:26.177585 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  2 19:43:26.177600 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  2 19:43:26.177614 kernel: cpuidle: using governor menu
Oct  2 19:43:26.177629 kernel: ACPI: bus type PCI registered
Oct  2 19:43:26.177643 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  2 19:43:26.177657 kernel: dca service started, version 1.12.1
Oct  2 19:43:26.177672 kernel: PCI: Using configuration type 1 for base access
Oct  2 19:43:26.177687 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  2 19:43:26.177702 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct  2 19:43:26.177719 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct  2 19:43:26.177733 kernel: ACPI: Added _OSI(Module Device)
Oct  2 19:43:26.177748 kernel: ACPI: Added _OSI(Processor Device)
Oct  2 19:43:26.177762 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  2 19:43:26.177777 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  2 19:43:26.177792 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct  2 19:43:26.177807 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct  2 19:43:26.177821 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct  2 19:43:26.177836 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Oct  2 19:43:26.177853 kernel: ACPI: Interpreter enabled
Oct  2 19:43:26.177867 kernel: ACPI: PM: (supports S0 S5)
Oct  2 19:43:26.177939 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  2 19:43:26.177959 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  2 19:43:26.178003 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Oct  2 19:43:26.178019 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  2 19:43:26.178547 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct  2 19:43:26.178684 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Oct  2 19:43:26.178708 kernel: acpiphp: Slot [3] registered
Oct  2 19:43:26.178724 kernel: acpiphp: Slot [4] registered
Oct  2 19:43:26.178739 kernel: acpiphp: Slot [5] registered
Oct  2 19:43:26.178754 kernel: acpiphp: Slot [6] registered
Oct  2 19:43:26.178769 kernel: acpiphp: Slot [7] registered
Oct  2 19:43:26.178784 kernel: acpiphp: Slot [8] registered
Oct  2 19:43:26.178798 kernel: acpiphp: Slot [9] registered
Oct  2 19:43:26.178812 kernel: acpiphp: Slot [10] registered
Oct  2 19:43:26.178827 kernel: acpiphp: Slot [11] registered
Oct  2 19:43:26.178844 kernel: acpiphp: Slot [12] registered
Oct  2 19:43:26.178858 kernel: acpiphp: Slot [13] registered
Oct  2 19:43:26.178873 kernel: acpiphp: Slot [14] registered
Oct  2 19:43:26.178887 kernel: acpiphp: Slot [15] registered
Oct  2 19:43:26.178901 kernel: acpiphp: Slot [16] registered
Oct  2 19:43:26.178916 kernel: acpiphp: Slot [17] registered
Oct  2 19:43:26.178930 kernel: acpiphp: Slot [18] registered
Oct  2 19:43:26.178945 kernel: acpiphp: Slot [19] registered
Oct  2 19:43:26.178959 kernel: acpiphp: Slot [20] registered
Oct  2 19:43:26.178977 kernel: acpiphp: Slot [21] registered
Oct  2 19:43:26.178990 kernel: acpiphp: Slot [22] registered
Oct  2 19:43:26.179057 kernel: acpiphp: Slot [23] registered
Oct  2 19:43:26.179072 kernel: acpiphp: Slot [24] registered
Oct  2 19:43:26.179086 kernel: acpiphp: Slot [25] registered
Oct  2 19:43:26.179100 kernel: acpiphp: Slot [26] registered
Oct  2 19:43:26.179115 kernel: acpiphp: Slot [27] registered
Oct  2 19:43:26.179129 kernel: acpiphp: Slot [28] registered
Oct  2 19:43:26.179143 kernel: acpiphp: Slot [29] registered
Oct  2 19:43:26.179158 kernel: acpiphp: Slot [30] registered
Oct  2 19:43:26.179175 kernel: acpiphp: Slot [31] registered
Oct  2 19:43:26.179190 kernel: PCI host bridge to bus 0000:00
Oct  2 19:43:26.181461 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  2 19:43:26.181641 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  2 19:43:26.181762 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  2 19:43:26.181926 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct  2 19:43:26.182050 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  2 19:43:26.182202 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct  2 19:43:26.182369 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct  2 19:43:26.182508 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Oct  2 19:43:26.182636 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Oct  2 19:43:26.182765 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Oct  2 19:43:26.182892 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Oct  2 19:43:26.183018 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Oct  2 19:43:26.183197 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Oct  2 19:43:26.186440 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Oct  2 19:43:26.186586 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Oct  2 19:43:26.186714 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Oct  2 19:43:26.187039 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Oct  2 19:43:26.187193 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Oct  2 19:43:26.189567 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct  2 19:43:26.189773 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  2 19:43:26.189911 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Oct  2 19:43:26.190114 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Oct  2 19:43:26.191385 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Oct  2 19:43:26.191979 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Oct  2 19:43:26.192007 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  2 19:43:26.192028 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  2 19:43:26.195334 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  2 19:43:26.195363 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  2 19:43:26.195405 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  2 19:43:26.195422 kernel: iommu: Default domain type: Translated 
Oct  2 19:43:26.195504 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Oct  2 19:43:26.197160 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Oct  2 19:43:26.198443 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  2 19:43:26.198711 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Oct  2 19:43:26.198743 kernel: vgaarb: loaded
Oct  2 19:43:26.198758 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  2 19:43:26.198773 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  2 19:43:26.198788 kernel: PTP clock support registered
Oct  2 19:43:26.198803 kernel: PCI: Using ACPI for IRQ routing
Oct  2 19:43:26.199019 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct  2 19:43:26.199035 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct  2 19:43:26.199076 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Oct  2 19:43:26.199096 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Oct  2 19:43:26.199111 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Oct  2 19:43:26.199126 kernel: clocksource: Switched to clocksource kvm-clock
Oct  2 19:43:26.199141 kernel: VFS: Disk quotas dquot_6.6.0
Oct  2 19:43:26.199156 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  2 19:43:26.199171 kernel: pnp: PnP ACPI init
Oct  2 19:43:26.199186 kernel: pnp: PnP ACPI: found 5 devices
Oct  2 19:43:26.199201 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  2 19:43:26.199215 kernel: NET: Registered PF_INET protocol family
Oct  2 19:43:26.199233 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct  2 19:43:26.199247 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct  2 19:43:26.207322 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  2 19:43:26.207345 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 19:43:26.207358 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Oct  2 19:43:26.207372 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct  2 19:43:26.207386 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct  2 19:43:26.207400 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct  2 19:43:26.207415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  2 19:43:26.207438 kernel: NET: Registered PF_XDP protocol family
Oct  2 19:43:26.207928 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  2 19:43:26.208048 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  2 19:43:26.208152 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  2 19:43:26.208265 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct  2 19:43:26.208561 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  2 19:43:26.208752 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Oct  2 19:43:26.208779 kernel: PCI: CLS 0 bytes, default 64
Oct  2 19:43:26.208881 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct  2 19:43:26.208898 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Oct  2 19:43:26.208912 kernel: clocksource: Switched to clocksource tsc
Oct  2 19:43:26.208926 kernel: Initialise system trusted keyrings
Oct  2 19:43:26.208940 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct  2 19:43:26.208989 kernel: Key type asymmetric registered
Oct  2 19:43:26.209003 kernel: Asymmetric key parser 'x509' registered
Oct  2 19:43:26.209016 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct  2 19:43:26.209035 kernel: io scheduler mq-deadline registered
Oct  2 19:43:26.209049 kernel: io scheduler kyber registered
Oct  2 19:43:26.209062 kernel: io scheduler bfq registered
Oct  2 19:43:26.209077 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct  2 19:43:26.209091 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  2 19:43:26.209106 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  2 19:43:26.209120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  2 19:43:26.209135 kernel: i8042: Warning: Keylock active
Oct  2 19:43:26.209149 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  2 19:43:26.209167 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  2 19:43:26.209335 kernel: rtc_cmos 00:00: RTC can wake from S4
Oct  2 19:43:26.209452 kernel: rtc_cmos 00:00: registered as rtc0
Oct  2 19:43:26.209563 kernel: rtc_cmos 00:00: setting system clock to 2023-10-02T19:43:25 UTC (1696275805)
Oct  2 19:43:26.209790 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Oct  2 19:43:26.209858 kernel: intel_pstate: CPU model not supported
Oct  2 19:43:26.209875 kernel: NET: Registered PF_INET6 protocol family
Oct  2 19:43:26.209890 kernel: Segment Routing with IPv6
Oct  2 19:43:26.210602 kernel: In-situ OAM (IOAM) with IPv6
Oct  2 19:43:26.210631 kernel: NET: Registered PF_PACKET protocol family
Oct  2 19:43:26.210733 kernel: Key type dns_resolver registered
Oct  2 19:43:26.210750 kernel: IPI shorthand broadcast: enabled
Oct  2 19:43:26.210870 kernel: sched_clock: Marking stable (436608518, 319942959)->(929386147, -172834670)
Oct  2 19:43:26.210887 kernel: registered taskstats version 1
Oct  2 19:43:26.210903 kernel: Loading compiled-in X.509 certificates
Oct  2 19:43:26.210918 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861'
Oct  2 19:43:26.210933 kernel: Key type .fscrypt registered
Oct  2 19:43:26.210953 kernel: Key type fscrypt-provisioning registered
Oct  2 19:43:26.210968 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  2 19:43:26.210983 kernel: ima: Allocated hash algorithm: sha1
Oct  2 19:43:26.210998 kernel: ima: No architecture policies found
Oct  2 19:43:26.211013 kernel: Freeing unused kernel image (initmem) memory: 45372K
Oct  2 19:43:26.211028 kernel: Write protecting the kernel read-only data: 28672k
Oct  2 19:43:26.211043 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Oct  2 19:43:26.211058 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K
Oct  2 19:43:26.211072 kernel: Run /init as init process
Oct  2 19:43:26.211090 kernel:   with arguments:
Oct  2 19:43:26.211105 kernel:     /init
Oct  2 19:43:26.211193 kernel:   with environment:
Oct  2 19:43:26.211210 kernel:     HOME=/
Oct  2 19:43:26.211225 kernel:     TERM=linux
Oct  2 19:43:26.211239 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Oct  2 19:43:26.219381 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 19:43:26.219426 systemd[1]: Detected virtualization amazon.
Oct  2 19:43:26.219587 systemd[1]: Detected architecture x86-64.
Oct  2 19:43:26.219608 systemd[1]: Running in initrd.
Oct  2 19:43:26.219624 systemd[1]: No hostname configured, using default hostname.
Oct  2 19:43:26.219640 systemd[1]: Hostname set to <localhost>.
Oct  2 19:43:26.219675 systemd[1]: Initializing machine ID from VM UUID.
Oct  2 19:43:26.219694 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct  2 19:43:26.219711 systemd[1]: Queued start job for default target initrd.target.
Oct  2 19:43:26.219727 systemd[1]: Started systemd-ask-password-console.path.
Oct  2 19:43:26.219740 systemd[1]: Reached target cryptsetup.target.
Oct  2 19:43:26.219755 systemd[1]: Reached target paths.target.
Oct  2 19:43:26.219811 systemd[1]: Reached target slices.target.
Oct  2 19:43:26.219828 systemd[1]: Reached target swap.target.
Oct  2 19:43:26.219843 systemd[1]: Reached target timers.target.
Oct  2 19:43:26.219864 systemd[1]: Listening on iscsid.socket.
Oct  2 19:43:26.219880 systemd[1]: Listening on iscsiuio.socket.
Oct  2 19:43:26.219897 systemd[1]: Listening on systemd-journald-audit.socket.
Oct  2 19:43:26.219913 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct  2 19:43:26.219930 systemd[1]: Listening on systemd-journald.socket.
Oct  2 19:43:26.219946 systemd[1]: Listening on systemd-networkd.socket.
Oct  2 19:43:26.219962 systemd[1]: Listening on systemd-udevd-control.socket.
Oct  2 19:43:26.220034 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct  2 19:43:26.220053 systemd[1]: Reached target sockets.target.
Oct  2 19:43:26.220074 systemd[1]: Starting kmod-static-nodes.service...
Oct  2 19:43:26.220090 systemd[1]: Finished network-cleanup.service.
Oct  2 19:43:26.220107 systemd[1]: Starting systemd-fsck-usr.service...
Oct  2 19:43:26.220123 systemd[1]: Starting systemd-journald.service...
Oct  2 19:43:26.220139 systemd[1]: Starting systemd-modules-load.service...
Oct  2 19:43:26.220155 systemd[1]: Starting systemd-resolved.service...
Oct  2 19:43:26.220171 systemd[1]: Starting systemd-vconsole-setup.service...
Oct  2 19:43:26.220190 systemd[1]: Finished kmod-static-nodes.service.
Oct  2 19:43:26.220207 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  2 19:43:26.220232 systemd-journald[184]: Journal started
Oct  2 19:43:26.220414 systemd-journald[184]: Runtime Journal (/run/log/journal/ec2fdc29d71769197ae6cfdf8484a51e) is 4.8M, max 38.7M, 33.9M free.
Oct  2 19:43:26.167113 systemd-modules-load[185]: Inserted module 'overlay'
Oct  2 19:43:26.403629 kernel: Bridge firewalling registered
Oct  2 19:43:26.403662 kernel: SCSI subsystem initialized
Oct  2 19:43:26.403678 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  2 19:43:26.403695 kernel: device-mapper: uevent: version 1.0.3
Oct  2 19:43:26.403714 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct  2 19:43:26.224633 systemd-modules-load[185]: Inserted module 'br_netfilter'
Oct  2 19:43:26.293905 systemd-modules-load[185]: Inserted module 'dm_multipath'
Oct  2 19:43:26.412749 systemd[1]: Started systemd-journald.service.
Oct  2 19:43:26.412786 kernel: audit: type=1130 audit(1696275806.403:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.295903 systemd-resolved[186]: Positive Trust Anchors:
Oct  2 19:43:26.423526 kernel: audit: type=1130 audit(1696275806.412:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.423561 kernel: audit: type=1130 audit(1696275806.418:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.295914 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct  2 19:43:26.295965 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct  2 19:43:26.442987 kernel: audit: type=1130 audit(1696275806.426:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.299770 systemd-resolved[186]: Defaulting to hostname 'linux'.
Oct  2 19:43:26.412941 systemd[1]: Started systemd-resolved.service.
Oct  2 19:43:26.463806 kernel: audit: type=1130 audit(1696275806.444:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.463842 kernel: audit: type=1130 audit(1696275806.447:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.423826 systemd[1]: Finished systemd-fsck-usr.service.
Oct  2 19:43:26.427784 systemd[1]: Finished systemd-modules-load.service.
Oct  2 19:43:26.445417 systemd[1]: Finished systemd-vconsole-setup.service.
Oct  2 19:43:26.448351 systemd[1]: Reached target nss-lookup.target.
Oct  2 19:43:26.465727 systemd[1]: Starting dracut-cmdline-ask.service...
Oct  2 19:43:26.468575 systemd[1]: Starting systemd-sysctl.service...
Oct  2 19:43:26.469756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct  2 19:43:26.493131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct  2 19:43:26.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.500276 kernel: audit: type=1130 audit(1696275806.493:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.502611 systemd[1]: Finished systemd-sysctl.service.
Oct  2 19:43:26.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.511337 kernel: audit: type=1130 audit(1696275806.502:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.518272 systemd[1]: Finished dracut-cmdline-ask.service.
Oct  2 19:43:26.536988 kernel: audit: type=1130 audit(1696275806.519:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.538569 systemd[1]: Starting dracut-cmdline.service...
Oct  2 19:43:26.556397 dracut-cmdline[206]: dracut-dracut-053
Oct  2 19:43:26.561193 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1
Oct  2 19:43:26.660331 kernel: Loading iSCSI transport class v2.0-870.
Oct  2 19:43:26.678385 kernel: iscsi: registered transport (tcp)
Oct  2 19:43:26.715572 kernel: iscsi: registered transport (qla4xxx)
Oct  2 19:43:26.715764 kernel: QLogic iSCSI HBA Driver
Oct  2 19:43:26.766456 systemd[1]: Finished dracut-cmdline.service.
Oct  2 19:43:26.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:26.769846 systemd[1]: Starting dracut-pre-udev.service...
Oct  2 19:43:26.825302 kernel: raid6: avx512x4 gen() 15284 MB/s
Oct  2 19:43:26.843292 kernel: raid6: avx512x4 xor()  7568 MB/s
Oct  2 19:43:26.861323 kernel: raid6: avx512x2 gen() 14200 MB/s
Oct  2 19:43:26.879305 kernel: raid6: avx512x2 xor() 16459 MB/s
Oct  2 19:43:26.897288 kernel: raid6: avx512x1 gen() 16801 MB/s
Oct  2 19:43:26.915290 kernel: raid6: avx512x1 xor() 17951 MB/s
Oct  2 19:43:26.933289 kernel: raid6: avx2x4   gen() 15855 MB/s
Oct  2 19:43:26.951351 kernel: raid6: avx2x4   xor()  6793 MB/s
Oct  2 19:43:26.968291 kernel: raid6: avx2x2   gen() 16574 MB/s
Oct  2 19:43:26.985289 kernel: raid6: avx2x2   xor() 15563 MB/s
Oct  2 19:43:27.003293 kernel: raid6: avx2x1   gen() 12011 MB/s
Oct  2 19:43:27.021581 kernel: raid6: avx2x1   xor() 13237 MB/s
Oct  2 19:43:27.039349 kernel: raid6: sse2x4   gen()  8006 MB/s
Oct  2 19:43:27.056330 kernel: raid6: sse2x4   xor()  5304 MB/s
Oct  2 19:43:27.074325 kernel: raid6: sse2x2   gen()  8746 MB/s
Oct  2 19:43:27.091311 kernel: raid6: sse2x2   xor()  4888 MB/s
Oct  2 19:43:27.109288 kernel: raid6: sse2x1   gen()  8586 MB/s
Oct  2 19:43:27.127414 kernel: raid6: sse2x1   xor()  4361 MB/s
Oct  2 19:43:27.127485 kernel: raid6: using algorithm avx512x1 gen() 16801 MB/s
Oct  2 19:43:27.127504 kernel: raid6: .... xor() 17951 MB/s, rmw enabled
Oct  2 19:43:27.128498 kernel: raid6: using avx512x2 recovery algorithm
Oct  2 19:43:27.144290 kernel: xor: automatically using best checksumming function   avx       
Oct  2 19:43:27.263286 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Oct  2 19:43:27.274297 systemd[1]: Finished dracut-pre-udev.service.
Oct  2 19:43:27.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:27.276000 audit: BPF prog-id=7 op=LOAD
Oct  2 19:43:27.276000 audit: BPF prog-id=8 op=LOAD
Oct  2 19:43:27.277529 systemd[1]: Starting systemd-udevd.service...
Oct  2 19:43:27.302990 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Oct  2 19:43:27.311901 systemd[1]: Started systemd-udevd.service.
Oct  2 19:43:27.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:27.315505 systemd[1]: Starting dracut-pre-trigger.service...
Oct  2 19:43:27.350582 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation
Oct  2 19:43:27.403450 systemd[1]: Finished dracut-pre-trigger.service.
Oct  2 19:43:27.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:27.409347 systemd[1]: Starting systemd-udev-trigger.service...
Oct  2 19:43:27.479473 systemd[1]: Finished systemd-udev-trigger.service.
Oct  2 19:43:27.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:27.566765 kernel: ena 0000:00:05.0: ENA device version: 0.10
Oct  2 19:43:27.567173 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Oct  2 19:43:27.590637 kernel: cryptd: max_cpu_qlen set to 1000
Oct  2 19:43:27.611233 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Oct  2 19:43:27.623411 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  2 19:43:27.623437 kernel: AES CTR mode by8 optimization enabled
Oct  2 19:43:27.623454 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:43:c6:e3:f0:03
Oct  2 19:43:27.625172 (udev-worker)[431]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 19:43:27.833846 kernel: nvme nvme0: pci function 0000:00:04.0
Oct  2 19:43:27.834478 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  2 19:43:27.834504 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Oct  2 19:43:27.834896 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct  2 19:43:27.834917 kernel: GPT:9289727 != 16777215
Oct  2 19:43:27.834935 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct  2 19:43:27.834952 kernel: GPT:9289727 != 16777215
Oct  2 19:43:27.834968 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct  2 19:43:27.835034 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct  2 19:43:27.835054 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (435)
Oct  2 19:43:27.751943 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Oct  2 19:43:27.853844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct  2 19:43:27.877126 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Oct  2 19:43:27.891231 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Oct  2 19:43:27.891382 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Oct  2 19:43:27.897885 systemd[1]: Starting disk-uuid.service...
Oct  2 19:43:27.906408 disk-uuid[578]: Primary Header is updated.
Oct  2 19:43:27.906408 disk-uuid[578]: Secondary Entries is updated.
Oct  2 19:43:27.906408 disk-uuid[578]: Secondary Header is updated.
Oct  2 19:43:27.912640 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct  2 19:43:27.917275 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct  2 19:43:28.923129 disk-uuid[579]: The operation has completed successfully.
Oct  2 19:43:28.929883 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct  2 19:43:29.095697 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct  2 19:43:29.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.095865 systemd[1]: Finished disk-uuid.service.
Oct  2 19:43:29.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.105386 systemd[1]: Starting verity-setup.service...
Oct  2 19:43:29.136741 kernel: device-mapper: verity: sha256 using implementation "sha256-generic"
Oct  2 19:43:29.235069 systemd[1]: Found device dev-mapper-usr.device.
Oct  2 19:43:29.239672 systemd[1]: Mounting sysusr-usr.mount...
Oct  2 19:43:29.240052 systemd[1]: Finished verity-setup.service.
Oct  2 19:43:29.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.354561 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Oct  2 19:43:29.356432 systemd[1]: Mounted sysusr-usr.mount.
Oct  2 19:43:29.358580 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Oct  2 19:43:29.361629 systemd[1]: Starting ignition-setup.service...
Oct  2 19:43:29.364265 systemd[1]: Starting parse-ip-for-networkd.service...
Oct  2 19:43:29.389300 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct  2 19:43:29.389381 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct  2 19:43:29.389402 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Oct  2 19:43:29.420295 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct  2 19:43:29.439045 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct  2 19:43:29.463767 systemd[1]: Finished parse-ip-for-networkd.service.
Oct  2 19:43:29.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.466000 audit: BPF prog-id=9 op=LOAD
Oct  2 19:43:29.468047 systemd[1]: Starting systemd-networkd.service...
Oct  2 19:43:29.472960 systemd[1]: Finished ignition-setup.service.
Oct  2 19:43:29.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.476782 systemd[1]: Starting ignition-fetch-offline.service...
Oct  2 19:43:29.502880 systemd-networkd[1090]: lo: Link UP
Oct  2 19:43:29.502894 systemd-networkd[1090]: lo: Gained carrier
Oct  2 19:43:29.506921 systemd-networkd[1090]: Enumeration completed
Oct  2 19:43:29.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.507115 systemd[1]: Started systemd-networkd.service.
Oct  2 19:43:29.510729 systemd[1]: Reached target network.target.
Oct  2 19:43:29.512555 systemd-networkd[1090]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct  2 19:43:29.516185 systemd-networkd[1090]: eth0: Link UP
Oct  2 19:43:29.516192 systemd-networkd[1090]: eth0: Gained carrier
Oct  2 19:43:29.518166 systemd[1]: Starting iscsiuio.service...
Oct  2 19:43:29.530202 systemd[1]: Started iscsiuio.service.
Oct  2 19:43:29.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.531498 systemd-networkd[1090]: eth0: DHCPv4 address 172.31.22.191/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct  2 19:43:29.533659 systemd[1]: Starting iscsid.service...
Oct  2 19:43:29.540827 iscsid[1096]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Oct  2 19:43:29.540827 iscsid[1096]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Oct  2 19:43:29.540827 iscsid[1096]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct  2 19:43:29.540827 iscsid[1096]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct  2 19:43:29.540827 iscsid[1096]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct  2 19:43:29.540827 iscsid[1096]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct  2 19:43:29.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.542740 systemd[1]: Started iscsid.service.
Oct  2 19:43:29.546650 systemd[1]: Starting dracut-initqueue.service...
Oct  2 19:43:29.569803 systemd[1]: Finished dracut-initqueue.service.
Oct  2 19:43:29.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:29.571087 systemd[1]: Reached target remote-fs-pre.target.
Oct  2 19:43:29.573385 systemd[1]: Reached target remote-cryptsetup.target.
Oct  2 19:43:29.574733 systemd[1]: Reached target remote-fs.target.
Oct  2 19:43:29.578730 systemd[1]: Starting dracut-pre-mount.service...
Oct  2 19:43:29.590711 systemd[1]: Finished dracut-pre-mount.service.
Oct  2 19:43:29.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.082426 ignition[1091]: Ignition 2.14.0
Oct  2 19:43:30.082439 ignition[1091]: Stage: fetch-offline
Oct  2 19:43:30.082590 ignition[1091]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct  2 19:43:30.082653 ignition[1091]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct  2 19:43:30.100318 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct  2 19:43:30.101018 ignition[1091]: Ignition finished successfully
Oct  2 19:43:30.103851 systemd[1]: Finished ignition-fetch-offline.service.
Oct  2 19:43:30.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.107165 systemd[1]: Starting ignition-fetch.service...
Oct  2 19:43:30.117646 ignition[1115]: Ignition 2.14.0
Oct  2 19:43:30.117660 ignition[1115]: Stage: fetch
Oct  2 19:43:30.117864 ignition[1115]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct  2 19:43:30.117898 ignition[1115]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct  2 19:43:30.126463 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct  2 19:43:30.128071 ignition[1115]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct  2 19:43:30.146205 ignition[1115]: INFO     : PUT result: OK
Oct  2 19:43:30.151047 ignition[1115]: DEBUG    : parsed url from cmdline: ""
Oct  2 19:43:30.151047 ignition[1115]: INFO     : no config URL provided
Oct  2 19:43:30.151047 ignition[1115]: INFO     : reading system config file "/usr/lib/ignition/user.ign"
Oct  2 19:43:30.151047 ignition[1115]: INFO     : no config at "/usr/lib/ignition/user.ign"
Oct  2 19:43:30.157372 ignition[1115]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct  2 19:43:30.157372 ignition[1115]: INFO     : PUT result: OK
Oct  2 19:43:30.160010 ignition[1115]: INFO     : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Oct  2 19:43:30.162453 ignition[1115]: INFO     : GET result: OK
Oct  2 19:43:30.163403 ignition[1115]: DEBUG    : parsing config with SHA512: c51bd88382198f00ddc6fe0f5fcfe229321838776c6a7b32af83fe30fdd1a62c300a47dcec5293a2e77c6316042c0b9b612eb4a8b9bb787637a0fdcde2c9caf1
Oct  2 19:43:30.181381 unknown[1115]: fetched base config from "system"
Oct  2 19:43:30.181395 unknown[1115]: fetched base config from "system"
Oct  2 19:43:30.182081 ignition[1115]: fetch: fetch complete
Oct  2 19:43:30.181402 unknown[1115]: fetched user config from "aws"
Oct  2 19:43:30.182086 ignition[1115]: fetch: fetch passed
Oct  2 19:43:30.182144 ignition[1115]: Ignition finished successfully
Oct  2 19:43:30.195454 kernel: kauditd_printk_skb: 19 callbacks suppressed
Oct  2 19:43:30.195494 kernel: audit: type=1130 audit(1696275810.186:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.185419 systemd[1]: Finished ignition-fetch.service.
Oct  2 19:43:30.188586 systemd[1]: Starting ignition-kargs.service...
Oct  2 19:43:30.227784 ignition[1121]: Ignition 2.14.0
Oct  2 19:43:30.227797 ignition[1121]: Stage: kargs
Oct  2 19:43:30.228711 ignition[1121]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct  2 19:43:30.228895 ignition[1121]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct  2 19:43:30.247173 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct  2 19:43:30.248826 ignition[1121]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct  2 19:43:30.250590 ignition[1121]: INFO     : PUT result: OK
Oct  2 19:43:30.254057 ignition[1121]: kargs: kargs passed
Oct  2 19:43:30.254113 ignition[1121]: Ignition finished successfully
Oct  2 19:43:30.256521 systemd[1]: Finished ignition-kargs.service.
Oct  2 19:43:30.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.258607 systemd[1]: Starting ignition-disks.service...
Oct  2 19:43:30.264967 kernel: audit: type=1130 audit(1696275810.257:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.268151 ignition[1127]: Ignition 2.14.0
Oct  2 19:43:30.268277 ignition[1127]: Stage: disks
Oct  2 19:43:30.268444 ignition[1127]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct  2 19:43:30.268464 ignition[1127]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct  2 19:43:30.276519 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct  2 19:43:30.277877 ignition[1127]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct  2 19:43:30.279733 ignition[1127]: INFO     : PUT result: OK
Oct  2 19:43:30.282977 ignition[1127]: disks: disks passed
Oct  2 19:43:30.283156 ignition[1127]: Ignition finished successfully
Oct  2 19:43:30.285742 systemd[1]: Finished ignition-disks.service.
Oct  2 19:43:30.287803 systemd[1]: Reached target initrd-root-device.target.
Oct  2 19:43:30.290157 systemd[1]: Reached target local-fs-pre.target.
Oct  2 19:43:30.291943 systemd[1]: Reached target local-fs.target.
Oct  2 19:43:30.293683 systemd[1]: Reached target sysinit.target.
Oct  2 19:43:30.295332 systemd[1]: Reached target basic.target.
Oct  2 19:43:30.301632 kernel: audit: type=1130 audit(1696275810.287:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.300782 systemd[1]: Starting systemd-fsck-root.service...
Oct  2 19:43:30.340462 systemd-fsck[1135]: ROOT: clean, 603/553520 files, 56012/553472 blocks
Oct  2 19:43:30.345695 systemd[1]: Finished systemd-fsck-root.service.
Oct  2 19:43:30.355057 kernel: audit: type=1130 audit(1696275810.346:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.348911 systemd[1]: Mounting sysroot.mount...
Oct  2 19:43:30.372288 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct  2 19:43:30.375848 systemd[1]: Mounted sysroot.mount.
Oct  2 19:43:30.379908 systemd[1]: Reached target initrd-root-fs.target.
Oct  2 19:43:30.404032 systemd[1]: Mounting sysroot-usr.mount...
Oct  2 19:43:30.408633 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct  2 19:43:30.408725 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct  2 19:43:30.408769 systemd[1]: Reached target ignition-diskful.target.
Oct  2 19:43:30.422902 systemd[1]: Mounted sysroot-usr.mount.
Oct  2 19:43:30.434175 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct  2 19:43:30.435322 systemd[1]: Starting initrd-setup-root.service...
Oct  2 19:43:30.452352 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory
Oct  2 19:43:30.457283 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1152)
Oct  2 19:43:30.467075 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct  2 19:43:30.467137 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct  2 19:43:30.467155 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Oct  2 19:43:30.476284 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct  2 19:43:30.477282 initrd-setup-root[1183]: cut: /sysroot/etc/group: No such file or directory
Oct  2 19:43:30.488201 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct  2 19:43:30.492528 initrd-setup-root[1191]: cut: /sysroot/etc/shadow: No such file or directory
Oct  2 19:43:30.499654 initrd-setup-root[1199]: cut: /sysroot/etc/gshadow: No such file or directory
Oct  2 19:43:30.696888 systemd[1]: Finished initrd-setup-root.service.
Oct  2 19:43:30.711296 kernel: audit: type=1130 audit(1696275810.699:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.701798 systemd[1]: Starting ignition-mount.service...
Oct  2 19:43:30.713743 systemd[1]: Starting sysroot-boot.service...
Oct  2 19:43:30.722148 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Oct  2 19:43:30.722284 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Oct  2 19:43:30.738110 ignition[1217]: INFO     : Ignition 2.14.0
Oct  2 19:43:30.739525 ignition[1217]: INFO     : Stage: mount
Oct  2 19:43:30.739525 ignition[1217]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct  2 19:43:30.739525 ignition[1217]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct  2 19:43:30.748854 ignition[1217]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct  2 19:43:30.751350 ignition[1217]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct  2 19:43:30.753568 ignition[1217]: INFO     : PUT result: OK
Oct  2 19:43:30.757761 ignition[1217]: INFO     : mount: mount passed
Oct  2 19:43:30.759098 ignition[1217]: INFO     : Ignition finished successfully
Oct  2 19:43:30.759008 systemd[1]: Finished ignition-mount.service.
Oct  2 19:43:30.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.763733 systemd[1]: Starting ignition-files.service...
Oct  2 19:43:30.770950 kernel: audit: type=1130 audit(1696275810.762:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.774950 systemd[1]: Finished sysroot-boot.service.
Oct  2 19:43:30.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.782278 kernel: audit: type=1130 audit(1696275810.776:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:30.782300 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct  2 19:43:30.796319 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1228)
Oct  2 19:43:30.799889 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct  2 19:43:30.799953 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct  2 19:43:30.799971 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Oct  2 19:43:30.806308 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct  2 19:43:30.809382 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct  2 19:43:30.822166 ignition[1247]: INFO     : Ignition 2.14.0
Oct  2 19:43:30.822166 ignition[1247]: INFO     : Stage: files
Oct  2 19:43:30.824770 ignition[1247]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct  2 19:43:30.824770 ignition[1247]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct  2 19:43:30.833782 ignition[1247]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct  2 19:43:30.835477 ignition[1247]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct  2 19:43:30.837626 ignition[1247]: INFO     : PUT result: OK
Oct  2 19:43:30.846437 ignition[1247]: DEBUG    : files: compiled without relabeling support, skipping
Oct  2 19:43:30.853081 ignition[1247]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Oct  2 19:43:30.853081 ignition[1247]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct  2 19:43:30.869409 ignition[1247]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct  2 19:43:30.875317 ignition[1247]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Oct  2 19:43:30.880999 unknown[1247]: wrote ssh authorized keys file for user: core
Oct  2 19:43:30.883950 ignition[1247]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct  2 19:43:30.883950 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Oct  2 19:43:30.883950 ignition[1247]: INFO     : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Oct  2 19:43:31.041266 ignition[1247]: INFO     : GET result: OK
Oct  2 19:43:31.112499 systemd-networkd[1090]: eth0: Gained IPv6LL
Oct  2 19:43:31.288015 ignition[1247]: DEBUG    : file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Oct  2 19:43:31.291558 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
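[Note on the "file matches expected sum of: ..." records: each downloaded artifact is checked against an expected SHA-512 digest before the file is written. A small Python sketch of that kind of check follows; verify_sha512 is a hypothetical helper name, not Ignition's implementation, and the commented example reuses the path and digest shown in the log above.]

    # Sketch of a SHA-512 verification like the one logged for each downloaded file.
    import hashlib

    def verify_sha512(path, expected_hex):
        h = hashlib.sha512()
        with open(path, "rb") as f:
            # Hash the file in 1 MiB chunks to keep memory use flat.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        if h.hexdigest() != expected_hex:
            raise ValueError("checksum mismatch for " + path)

    # Example with the CNI plugins tarball and digest from the records above:
    # verify_sha512("/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz",
    #               "4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30"
    #               "c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d")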
Oct  2 19:43:31.291558 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz"
Oct  2 19:43:31.291558 ignition[1247]: INFO     : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1
Oct  2 19:43:31.374984 ignition[1247]: INFO     : GET result: OK
Oct  2 19:43:31.458617 ignition[1247]: DEBUG    : file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df
Oct  2 19:43:31.464909 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz"
Oct  2 19:43:31.464909 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/etc/eks/bootstrap.sh"
Oct  2 19:43:31.482997 ignition[1247]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Oct  2 19:43:31.494736 ignition[1247]: INFO     : op(1): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3088566267"
Oct  2 19:43:31.499869 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1250)
Oct  2 19:43:31.499896 ignition[1247]: CRITICAL : op(1): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem3088566267": device or resource busy
Oct  2 19:43:31.499896 ignition[1247]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3088566267", trying btrfs: device or resource busy
Oct  2 19:43:31.499896 ignition[1247]: INFO     : op(2): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3088566267"
Oct  2 19:43:31.510962 ignition[1247]: INFO     : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3088566267"
Oct  2 19:43:31.515953 ignition[1247]: INFO     : op(3): [started]  unmounting "/mnt/oem3088566267"
Oct  2 19:43:31.518503 ignition[1247]: INFO     : op(3): [finished] unmounting "/mnt/oem3088566267"
Oct  2 19:43:31.518503 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Oct  2 19:43:31.518503 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/bin/kubeadm"
Oct  2 19:43:31.518008 systemd[1]: mnt-oem3088566267.mount: Deactivated successfully.
Oct  2 19:43:31.531734 ignition[1247]: INFO     : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1
Oct  2 19:43:31.612675 ignition[1247]: INFO     : GET result: OK
Oct  2 19:43:32.846887 ignition[1247]: DEBUG    : file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5
Oct  2 19:43:32.850770 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Oct  2 19:43:32.850770 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubelet"
Oct  2 19:43:32.850770 ignition[1247]: INFO     : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1
Oct  2 19:43:32.900542 ignition[1247]: INFO     : GET result: OK
Oct  2 19:43:34.627908 ignition[1247]: DEBUG    : file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54
Oct  2 19:43:34.633635 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Oct  2 19:43:34.633635 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/install.sh"
Oct  2 19:43:34.633635 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Oct  2 19:43:34.633635 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/docker/daemon.json"
Oct  2 19:43:34.633635 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Oct  2 19:43:34.633635 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Oct  2 19:43:34.633635 ignition[1247]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Oct  2 19:43:34.659966 ignition[1247]: INFO     : op(4): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1175795133"
Oct  2 19:43:34.659966 ignition[1247]: CRITICAL : op(4): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem1175795133": device or resource busy
Oct  2 19:43:34.659966 ignition[1247]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1175795133", trying btrfs: device or resource busy
Oct  2 19:43:34.659966 ignition[1247]: INFO     : op(5): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1175795133"
Oct  2 19:43:34.682980 ignition[1247]: INFO     : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1175795133"
Oct  2 19:43:34.682980 ignition[1247]: INFO     : op(6): [started]  unmounting "/mnt/oem1175795133"
Oct  2 19:43:34.682980 ignition[1247]: INFO     : op(6): [finished] unmounting "/mnt/oem1175795133"
Oct  2 19:43:34.682980 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Oct  2 19:43:34.682980 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Oct  2 19:43:34.682980 ignition[1247]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Oct  2 19:43:34.679624 systemd[1]: mnt-oem1175795133.mount: Deactivated successfully.
Oct  2 19:43:34.710420 ignition[1247]: INFO     : op(7): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3888303120"
Oct  2 19:43:34.712809 ignition[1247]: CRITICAL : op(7): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem3888303120": device or resource busy
Oct  2 19:43:34.712809 ignition[1247]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3888303120", trying btrfs: device or resource busy
Oct  2 19:43:34.712809 ignition[1247]: INFO     : op(8): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3888303120"
Oct  2 19:43:34.721394 ignition[1247]: INFO     : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3888303120"
Oct  2 19:43:34.724082 ignition[1247]: INFO     : op(9): [started]  unmounting "/mnt/oem3888303120"
Oct  2 19:43:34.728373 ignition[1247]: INFO     : op(9): [finished] unmounting "/mnt/oem3888303120"
Oct  2 19:43:34.730443 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Oct  2 19:43:34.730919 systemd[1]: mnt-oem3888303120.mount: Deactivated successfully.
Oct  2 19:43:34.737237 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/etc/systemd/system/nvidia.service"
Oct  2 19:43:34.741792 ignition[1247]: INFO     : oem config not found in "/usr/share/oem", looking on oem partition
Oct  2 19:43:34.751571 ignition[1247]: INFO     : op(a): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1211429633"
Oct  2 19:43:34.757624 ignition[1247]: CRITICAL : op(a): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem1211429633": device or resource busy
Oct  2 19:43:34.757624 ignition[1247]: ERROR    : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1211429633", trying btrfs: device or resource busy
Oct  2 19:43:34.757624 ignition[1247]: INFO     : op(b): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1211429633"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1211429633"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : op(c): [started]  unmounting "/mnt/oem1211429633"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : op(c): [finished] unmounting "/mnt/oem1211429633"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(d): [started]  processing unit "nvidia.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(d): [finished] processing unit "nvidia.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(e): [started]  processing unit "coreos-metadata-sshkeys@.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(f): [started]  processing unit "amazon-ssm-agent.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(f): op(10): [started]  writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(f): op(10): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(f): [finished] processing unit "amazon-ssm-agent.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(11): [started]  processing unit "prepare-cni-plugins.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(11): op(12): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(11): [finished] processing unit "prepare-cni-plugins.service"
Oct  2 19:43:34.757624 ignition[1247]: INFO     : files: op(13): [started]  processing unit "prepare-critools.service"
Oct  2 19:43:34.824101 kernel: audit: type=1130 audit(1696275814.807:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(13): op(14): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(13): [finished] processing unit "prepare-critools.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(15): [started]  setting preset to enabled for "amazon-ssm-agent.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(15): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(16): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(17): [started]  setting preset to enabled for "prepare-critools.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(17): [finished] setting preset to enabled for "prepare-critools.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(18): [started]  setting preset to enabled for "nvidia.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(18): [finished] setting preset to enabled for "nvidia.service"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(19): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: createResultFile: createFiles: op(1a): [started]  writing file "/sysroot/etc/.ignition-result.json"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct  2 19:43:34.824205 ignition[1247]: INFO     : files: files passed
Oct  2 19:43:34.824205 ignition[1247]: INFO     : Ignition finished successfully
Oct  2 19:43:34.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.800128 systemd[1]: Finished ignition-files.service.
Oct  2 19:43:34.894697 kernel: audit: type=1130 audit(1696275814.869:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.894738 kernel: audit: type=1130 audit(1696275814.878:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.824456 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct  2 19:43:34.836736 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct  2 19:43:34.914196 initrd-setup-root-after-ignition[1271]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct  2 19:43:34.839861 systemd[1]: Starting ignition-quench.service...
Oct  2 19:43:34.859561 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct  2 19:43:34.871182 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct  2 19:43:34.871349 systemd[1]: Finished ignition-quench.service.
Oct  2 19:43:34.894841 systemd[1]: Reached target ignition-complete.target.
Oct  2 19:43:34.907070 systemd[1]: Starting initrd-parse-etc.service...
Oct  2 19:43:34.942497 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  2 19:43:34.942634 systemd[1]: Finished initrd-parse-etc.service.
Oct  2 19:43:34.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:34.950870 systemd[1]: Reached target initrd-fs.target.
Oct  2 19:43:34.951301 systemd[1]: Reached target initrd.target.
Oct  2 19:43:34.951475 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct  2 19:43:34.953337 systemd[1]: Starting dracut-pre-pivot.service...
Oct  2 19:43:34.984601 systemd[1]: Finished dracut-pre-pivot.service.
Oct  2 19:43:34.987599 systemd[1]: Starting initrd-cleanup.service...
Oct  2 19:43:34.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.002440 systemd[1]: Stopped target nss-lookup.target.
Oct  2 19:43:35.005089 systemd[1]: Stopped target remote-cryptsetup.target.
Oct  2 19:43:35.007968 systemd[1]: Stopped target timers.target.
Oct  2 19:43:35.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.009333 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  2 19:43:35.009958 systemd[1]: Stopped dracut-pre-pivot.service.
Oct  2 19:43:35.011640 systemd[1]: Stopped target initrd.target.
Oct  2 19:43:35.013194 systemd[1]: Stopped target basic.target.
Oct  2 19:43:35.015715 systemd[1]: Stopped target ignition-complete.target.
Oct  2 19:43:35.017279 systemd[1]: Stopped target ignition-diskful.target.
Oct  2 19:43:35.020114 systemd[1]: Stopped target initrd-root-device.target.
Oct  2 19:43:35.020264 systemd[1]: Stopped target remote-fs.target.
Oct  2 19:43:35.024805 systemd[1]: Stopped target remote-fs-pre.target.
Oct  2 19:43:35.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.051988 iscsid[1096]: iscsid shutting down.
Oct  2 19:43:35.027630 systemd[1]: Stopped target sysinit.target.
Oct  2 19:43:35.030574 systemd[1]: Stopped target local-fs.target.
Oct  2 19:43:35.032206 systemd[1]: Stopped target local-fs-pre.target.
Oct  2 19:43:35.035113 systemd[1]: Stopped target swap.target.
Oct  2 19:43:35.038234 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  2 19:43:35.038365 systemd[1]: Stopped dracut-pre-mount.service.
Oct  2 19:43:35.040681 systemd[1]: Stopped target cryptsetup.target.
Oct  2 19:43:35.040998 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  2 19:43:35.041172 systemd[1]: Stopped dracut-initqueue.service.
Oct  2 19:43:35.041757 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct  2 19:43:35.041850 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct  2 19:43:35.046126 systemd[1]: ignition-files.service: Deactivated successfully.
Oct  2 19:43:35.046252 systemd[1]: Stopped ignition-files.service.
Oct  2 19:43:35.047534 systemd[1]: Stopping ignition-mount.service...
Oct  2 19:43:35.057805 systemd[1]: Stopping iscsid.service...
Oct  2 19:43:35.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.083009 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  2 19:43:35.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.083240 systemd[1]: Stopped kmod-static-nodes.service.
Oct  2 19:43:35.086309 systemd[1]: Stopping sysroot-boot.service...
Oct  2 19:43:35.089607 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  2 19:43:35.111586 ignition[1285]: INFO     : Ignition 2.14.0
Oct  2 19:43:35.111586 ignition[1285]: INFO     : Stage: umount
Oct  2 19:43:35.111586 ignition[1285]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct  2 19:43:35.111586 ignition[1285]: DEBUG    : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct  2 19:43:35.089852 systemd[1]: Stopped systemd-udev-trigger.service.
Oct  2 19:43:35.091847 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  2 19:43:35.092008 systemd[1]: Stopped dracut-pre-trigger.service.
Oct  2 19:43:35.096211 systemd[1]: iscsid.service: Deactivated successfully.
Oct  2 19:43:35.096354 systemd[1]: Stopped iscsid.service.
Oct  2 19:43:35.098788 systemd[1]: Stopping iscsiuio.service...
Oct  2 19:43:35.101176 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  2 19:43:35.101304 systemd[1]: Finished initrd-cleanup.service.
Oct  2 19:43:35.106231 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct  2 19:43:35.106375 systemd[1]: Stopped iscsiuio.service.
Oct  2 19:43:35.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.138846 ignition[1285]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct  2 19:43:35.140883 ignition[1285]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct  2 19:43:35.147053 ignition[1285]: INFO     : PUT result: OK
Oct  2 19:43:35.153561 ignition[1285]: INFO     : umount: umount passed
Oct  2 19:43:35.154842 ignition[1285]: INFO     : Ignition finished successfully
Oct  2 19:43:35.155733 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct  2 19:43:35.155862 systemd[1]: Stopped ignition-mount.service.
Oct  2 19:43:35.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.160929 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct  2 19:43:35.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.160989 systemd[1]: Stopped ignition-disks.service.
Oct  2 19:43:35.162582 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct  2 19:43:35.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.162634 systemd[1]: Stopped ignition-kargs.service.
Oct  2 19:43:35.169037 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct  2 19:43:35.169115 systemd[1]: Stopped ignition-fetch.service.
Oct  2 19:43:35.174185 systemd[1]: Stopped target network.target.
Oct  2 19:43:35.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.180953 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct  2 19:43:35.181037 systemd[1]: Stopped ignition-fetch-offline.service.
Oct  2 19:43:35.186227 systemd[1]: Stopped target paths.target.
Oct  2 19:43:35.187309 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  2 19:43:35.193550 systemd[1]: Stopped systemd-ask-password-console.path.
Oct  2 19:43:35.194831 systemd[1]: Stopped target slices.target.
Oct  2 19:43:35.198281 systemd[1]: Stopped target sockets.target.
Oct  2 19:43:35.200407 systemd[1]: iscsid.socket: Deactivated successfully.
Oct  2 19:43:35.200465 systemd[1]: Closed iscsid.socket.
Oct  2 19:43:35.218858 kernel: kauditd_printk_skb: 21 callbacks suppressed
Oct  2 19:43:35.218907 kernel: audit: type=1131 audit(1696275815.204:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.202504 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct  2 19:43:35.232621 kernel: audit: type=1131 audit(1696275815.219:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.232648 kernel: audit: type=1131 audit(1696275815.225:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.202545 systemd[1]: Closed iscsiuio.socket.
Oct  2 19:43:35.203647 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct  2 19:43:35.252586 kernel: audit: type=1131 audit(1696275815.233:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.252653 kernel: audit: type=1131 audit(1696275815.242:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.252667 kernel: audit: type=1334 audit(1696275815.243:66): prog-id=6 op=UNLOAD
Oct  2 19:43:35.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.243000 audit: BPF prog-id=6 op=UNLOAD
Oct  2 19:43:35.203701 systemd[1]: Stopped ignition-setup.service.
Oct  2 19:43:35.204884 systemd[1]: Stopping systemd-networkd.service...
Oct  2 19:43:35.210646 systemd[1]: Stopping systemd-resolved.service...
Oct  2 19:43:35.268024 kernel: audit: type=1131 audit(1696275815.255:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.268054 kernel: audit: type=1131 audit(1696275815.255:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.215937 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct  2 19:43:35.216055 systemd[1]: Stopped sysroot-boot.service.
Oct  2 19:43:35.285045 kernel: audit: type=1131 audit(1696275815.267:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.219732 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct  2 19:43:35.219832 systemd[1]: Stopped initrd-setup-root.service.
Oct  2 19:43:35.220311 systemd-networkd[1090]: eth0: DHCPv6 lease lost
Oct  2 19:43:35.289000 audit: BPF prog-id=9 op=UNLOAD
Oct  2 19:43:35.232613 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct  2 19:43:35.293297 kernel: audit: type=1334 audit(1696275815.289:70): prog-id=9 op=UNLOAD
Oct  2 19:43:35.232721 systemd[1]: Stopped systemd-resolved.service.
Oct  2 19:43:35.234678 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct  2 19:43:35.234989 systemd[1]: Stopped systemd-networkd.service.
Oct  2 19:43:35.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.243061 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct  2 19:43:35.243146 systemd[1]: Closed systemd-networkd.socket.
Oct  2 19:43:35.253582 systemd[1]: Stopping network-cleanup.service...
Oct  2 19:43:35.255948 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct  2 19:43:35.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.256030 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct  2 19:43:35.256217 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 19:43:35.256273 systemd[1]: Stopped systemd-sysctl.service.
Oct  2 19:43:35.268080 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 19:43:35.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.268147 systemd[1]: Stopped systemd-modules-load.service.
Oct  2 19:43:35.268573 systemd[1]: Stopping systemd-udevd.service...
Oct  2 19:43:35.287201 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  2 19:43:35.287555 systemd[1]: Stopped systemd-udevd.service.
Oct  2 19:43:35.296017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  2 19:43:35.296079 systemd[1]: Closed systemd-udevd-control.socket.
Oct  2 19:43:35.298896 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  2 19:43:35.298942 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct  2 19:43:35.300070 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  2 19:43:35.300132 systemd[1]: Stopped dracut-pre-udev.service.
Oct  2 19:43:35.305859 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  2 19:43:35.305931 systemd[1]: Stopped dracut-cmdline.service.
Oct  2 19:43:35.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.310298 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct  2 19:43:35.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.312733 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct  2 19:43:35.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:35.324118 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct  2 19:43:35.353056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  2 19:43:35.353179 systemd[1]: Stopped systemd-vconsole-setup.service.
Oct  2 19:43:35.356511 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct  2 19:43:35.356605 systemd[1]: Stopped network-cleanup.service.
Oct  2 19:43:35.359612 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  2 19:43:35.359753 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Oct  2 19:43:35.362368 systemd[1]: Reached target initrd-switch-root.target.
Oct  2 19:43:35.363772 systemd[1]: Starting initrd-switch-root.service...
Oct  2 19:43:35.382661 systemd[1]: Switching root.
Oct  2 19:43:35.401746 systemd-journald[184]: Journal stopped
Oct  2 19:43:41.366312 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Oct  2 19:43:41.366388 kernel: SELinux:  Class mctp_socket not defined in policy.
Oct  2 19:43:41.366413 kernel: SELinux:  Class anon_inode not defined in policy.
Oct  2 19:43:41.366441 kernel: SELinux: the above unknown classes and permissions will be allowed
Oct  2 19:43:41.366463 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 19:43:41.366483 kernel: SELinux:  policy capability open_perms=1
Oct  2 19:43:41.366604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 19:43:41.366629 kernel: SELinux:  policy capability always_check_network=0
Oct  2 19:43:41.366649 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 19:43:41.366673 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 19:43:41.366693 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Oct  2 19:43:41.366721 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Oct  2 19:43:41.366746 systemd[1]: Successfully loaded SELinux policy in 114.299ms.
Oct  2 19:43:41.366785 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.105ms.
Oct  2 19:43:41.366816 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 19:43:41.366839 systemd[1]: Detected virtualization amazon.
Oct  2 19:43:41.366863 systemd[1]: Detected architecture x86-64.
Oct  2 19:43:41.366885 systemd[1]: Detected first boot.
Oct  2 19:43:41.366908 systemd[1]: Initializing machine ID from VM UUID.
Oct  2 19:43:41.366931 systemd[1]: Populated /etc with preset unit settings.
Oct  2 19:43:41.366954 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct  2 19:43:41.366989 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct  2 19:43:41.367014 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct  2 19:43:41.367038 kernel: kauditd_printk_skb: 15 callbacks suppressed
Oct  2 19:43:41.367104 kernel: audit: type=1334 audit(1696275820.826:86): prog-id=12 op=LOAD
Oct  2 19:43:41.367125 kernel: audit: type=1334 audit(1696275820.826:87): prog-id=3 op=UNLOAD
Oct  2 19:43:41.367145 kernel: audit: type=1334 audit(1696275820.827:88): prog-id=13 op=LOAD
Oct  2 19:43:41.367170 kernel: audit: type=1334 audit(1696275820.829:89): prog-id=14 op=LOAD
Oct  2 19:43:41.367194 kernel: audit: type=1334 audit(1696275820.829:90): prog-id=4 op=UNLOAD
Oct  2 19:43:41.367215 kernel: audit: type=1334 audit(1696275820.829:91): prog-id=5 op=UNLOAD
Oct  2 19:43:41.367236 kernel: audit: type=1334 audit(1696275820.833:92): prog-id=15 op=LOAD
Oct  2 19:43:41.367327 kernel: audit: type=1334 audit(1696275820.833:93): prog-id=12 op=UNLOAD
Oct  2 19:43:41.367348 kernel: audit: type=1334 audit(1696275820.836:94): prog-id=16 op=LOAD
Oct  2 19:43:41.367365 kernel: audit: type=1334 audit(1696275820.842:95): prog-id=17 op=LOAD
Oct  2 19:43:41.367385 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct  2 19:43:41.367406 systemd[1]: Stopped initrd-switch-root.service.
Oct  2 19:43:41.367430 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  2 19:43:41.367450 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct  2 19:43:41.367508 systemd[1]: Created slice system-addon\x2drun.slice.
Oct  2 19:43:41.367529 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Oct  2 19:43:41.367550 systemd[1]: Created slice system-getty.slice.
Oct  2 19:43:41.367651 systemd[1]: Created slice system-modprobe.slice.
Oct  2 19:43:41.367675 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct  2 19:43:41.367753 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct  2 19:43:41.367928 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct  2 19:43:41.367952 systemd[1]: Created slice user.slice.
Oct  2 19:43:41.368048 systemd[1]: Started systemd-ask-password-console.path.
Oct  2 19:43:41.368185 systemd[1]: Started systemd-ask-password-wall.path.
Oct  2 19:43:41.368211 systemd[1]: Set up automount boot.automount.
Oct  2 19:43:41.368234 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct  2 19:43:41.368388 systemd[1]: Stopped target initrd-switch-root.target.
Oct  2 19:43:41.368445 systemd[1]: Stopped target initrd-fs.target.
Oct  2 19:43:41.368477 systemd[1]: Stopped target initrd-root-fs.target.
Oct  2 19:43:41.368497 systemd[1]: Reached target integritysetup.target.
Oct  2 19:43:41.368554 systemd[1]: Reached target remote-cryptsetup.target.
Oct  2 19:43:41.368576 systemd[1]: Reached target remote-fs.target.
Oct  2 19:43:41.368628 systemd[1]: Reached target slices.target.
Oct  2 19:43:41.368656 systemd[1]: Reached target swap.target.
Oct  2 19:43:41.369109 systemd[1]: Reached target torcx.target.
Oct  2 19:43:41.369160 systemd[1]: Reached target veritysetup.target.
Oct  2 19:43:41.369181 systemd[1]: Listening on systemd-coredump.socket.
Oct  2 19:43:41.369201 systemd[1]: Listening on systemd-initctl.socket.
Oct  2 19:43:41.369226 systemd[1]: Listening on systemd-networkd.socket.
Oct  2 19:43:41.369247 systemd[1]: Listening on systemd-udevd-control.socket.
Oct  2 19:43:41.369298 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct  2 19:43:41.369319 systemd[1]: Listening on systemd-userdbd.socket.
Oct  2 19:43:41.369339 systemd[1]: Mounting dev-hugepages.mount...
Oct  2 19:43:41.369359 systemd[1]: Mounting dev-mqueue.mount...
Oct  2 19:43:41.369538 systemd[1]: Mounting media.mount...
Oct  2 19:43:41.369567 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct  2 19:43:41.369589 systemd[1]: Mounting sys-kernel-debug.mount...
Oct  2 19:43:41.369610 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct  2 19:43:41.369631 systemd[1]: Mounting tmp.mount...
Oct  2 19:43:41.370031 systemd[1]: Starting flatcar-tmpfiles.service...
Oct  2 19:43:41.370062 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct  2 19:43:41.370083 systemd[1]: Starting kmod-static-nodes.service...
Oct  2 19:43:41.370107 systemd[1]: Starting modprobe@configfs.service...
Oct  2 19:43:41.370128 systemd[1]: Starting modprobe@dm_mod.service...
Oct  2 19:43:41.370148 systemd[1]: Starting modprobe@drm.service...
Oct  2 19:43:41.370235 systemd[1]: Starting modprobe@efi_pstore.service...
Oct  2 19:43:41.370277 systemd[1]: Starting modprobe@fuse.service...
Oct  2 19:43:41.370329 systemd[1]: Starting modprobe@loop.service...
Oct  2 19:43:41.370351 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct  2 19:43:41.370378 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct  2 19:43:41.370399 systemd[1]: Stopped systemd-fsck-root.service.
Oct  2 19:43:41.370423 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct  2 19:43:41.370443 systemd[1]: Stopped systemd-fsck-usr.service.
Oct  2 19:43:41.370463 systemd[1]: Stopped systemd-journald.service.
Oct  2 19:43:41.370483 systemd[1]: Starting systemd-journald.service...
Oct  2 19:43:41.370504 systemd[1]: Starting systemd-modules-load.service...
Oct  2 19:43:41.370523 systemd[1]: Starting systemd-network-generator.service...
Oct  2 19:43:41.370544 systemd[1]: Starting systemd-remount-fs.service...
Oct  2 19:43:41.370564 systemd[1]: Starting systemd-udev-trigger.service...
Oct  2 19:43:41.370584 systemd[1]: verity-setup.service: Deactivated successfully.
Oct  2 19:43:41.370607 systemd[1]: Stopped verity-setup.service.
Oct  2 19:43:41.370627 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct  2 19:43:41.370647 kernel: loop: module loaded
Oct  2 19:43:41.370727 systemd[1]: Mounted dev-hugepages.mount.
Oct  2 19:43:41.370748 systemd[1]: Mounted dev-mqueue.mount.
Oct  2 19:43:41.370768 systemd[1]: Mounted media.mount.
Oct  2 19:43:41.370789 systemd[1]: Mounted sys-kernel-debug.mount.
Oct  2 19:43:41.370810 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct  2 19:43:41.370835 systemd[1]: Mounted tmp.mount.
Oct  2 19:43:41.370857 systemd[1]: Finished kmod-static-nodes.service.
Oct  2 19:43:41.370880 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 19:43:41.370902 systemd[1]: Finished modprobe@configfs.service.
Oct  2 19:43:41.370923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct  2 19:43:41.370943 systemd[1]: Finished modprobe@dm_mod.service.
Oct  2 19:43:41.370968 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  2 19:43:41.370989 systemd[1]: Finished modprobe@drm.service.
Oct  2 19:43:41.371011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  2 19:43:41.371035 systemd[1]: Finished modprobe@efi_pstore.service.
Oct  2 19:43:41.371056 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct  2 19:43:41.371078 systemd[1]: Finished modprobe@loop.service.
Oct  2 19:43:41.371098 kernel: fuse: init (API version 7.34)
Oct  2 19:43:41.371118 systemd[1]: Finished systemd-modules-load.service.
Oct  2 19:43:41.371140 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  2 19:43:41.371164 systemd[1]: Finished modprobe@fuse.service.
Oct  2 19:43:41.371186 systemd[1]: Finished systemd-network-generator.service.
Oct  2 19:43:41.371207 systemd[1]: Finished systemd-remount-fs.service.
Oct  2 19:43:41.371227 systemd[1]: Reached target network-pre.target.
Oct  2 19:43:41.371248 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct  2 19:43:41.371291 systemd[1]: Mounting sys-kernel-config.mount...
Oct  2 19:43:41.371310 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct  2 19:43:41.372904 systemd[1]: Starting systemd-hwdb-update.service...
Oct  2 19:43:41.372931 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  2 19:43:41.373059 systemd[1]: Starting systemd-random-seed.service...
Oct  2 19:43:41.373083 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct  2 19:43:41.373103 systemd[1]: Starting systemd-sysctl.service...
Oct  2 19:43:41.373124 systemd[1]: Finished flatcar-tmpfiles.service.
Oct  2 19:43:41.373151 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct  2 19:43:41.373176 systemd[1]: Mounted sys-kernel-config.mount.
Oct  2 19:43:41.373193 systemd[1]: Starting systemd-sysusers.service...
Oct  2 19:43:41.373213 systemd[1]: Finished systemd-random-seed.service.
Oct  2 19:43:41.373234 systemd[1]: Reached target first-boot-complete.target.
Oct  2 19:43:41.373287 systemd-journald[1396]: Journal started
Oct  2 19:43:41.373368 systemd-journald[1396]: Runtime Journal (/run/log/journal/ec2fdc29d71769197ae6cfdf8484a51e) is 4.8M, max 38.7M, 33.9M free.
Oct  2 19:43:41.373414 systemd[1]: Finished systemd-sysctl.service.
Oct  2 19:43:36.057000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  2 19:43:36.248000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct  2 19:43:36.248000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct  2 19:43:36.248000 audit: BPF prog-id=10 op=LOAD
Oct  2 19:43:36.248000 audit: BPF prog-id=10 op=UNLOAD
Oct  2 19:43:36.248000 audit: BPF prog-id=11 op=LOAD
Oct  2 19:43:36.248000 audit: BPF prog-id=11 op=UNLOAD
Oct  2 19:43:40.826000 audit: BPF prog-id=12 op=LOAD
Oct  2 19:43:40.826000 audit: BPF prog-id=3 op=UNLOAD
Oct  2 19:43:40.827000 audit: BPF prog-id=13 op=LOAD
Oct  2 19:43:40.829000 audit: BPF prog-id=14 op=LOAD
Oct  2 19:43:40.829000 audit: BPF prog-id=4 op=UNLOAD
Oct  2 19:43:40.829000 audit: BPF prog-id=5 op=UNLOAD
Oct  2 19:43:40.833000 audit: BPF prog-id=15 op=LOAD
Oct  2 19:43:40.833000 audit: BPF prog-id=12 op=UNLOAD
Oct  2 19:43:40.836000 audit: BPF prog-id=16 op=LOAD
Oct  2 19:43:40.842000 audit: BPF prog-id=17 op=LOAD
Oct  2 19:43:40.842000 audit: BPF prog-id=13 op=UNLOAD
Oct  2 19:43:40.842000 audit: BPF prog-id=14 op=UNLOAD
Oct  2 19:43:40.844000 audit: BPF prog-id=18 op=LOAD
Oct  2 19:43:40.844000 audit: BPF prog-id=15 op=UNLOAD
Oct  2 19:43:40.845000 audit: BPF prog-id=19 op=LOAD
Oct  2 19:43:40.846000 audit: BPF prog-id=20 op=LOAD
Oct  2 19:43:40.846000 audit: BPF prog-id=16 op=UNLOAD
Oct  2 19:43:40.846000 audit: BPF prog-id=17 op=UNLOAD
Oct  2 19:43:40.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:40.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:40.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:40.855000 audit: BPF prog-id=18 op=UNLOAD
Oct  2 19:43:41.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.090000 audit: BPF prog-id=21 op=LOAD
Oct  2 19:43:41.090000 audit: BPF prog-id=22 op=LOAD
Oct  2 19:43:41.090000 audit: BPF prog-id=23 op=LOAD
Oct  2 19:43:41.090000 audit: BPF prog-id=19 op=UNLOAD
Oct  2 19:43:41.090000 audit: BPF prog-id=20 op=UNLOAD
Oct  2 19:43:41.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.376499 systemd[1]: Started systemd-journald.service.
Oct  2 19:43:41.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.363000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct  2 19:43:41.363000 audit[1396]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd06229ca0 a2=4000 a3=7ffd06229d3c items=0 ppid=1 pid=1396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:43:41.363000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct  2 19:43:41.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:40.824022 systemd[1]: Queued start job for default target multi-user.target.
Oct  2 19:43:36.466653 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct  2 19:43:41.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:40.846709 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  2 19:43:36.467470 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct  2 19:43:36.467503 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct  2 19:43:36.467554 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Oct  2 19:43:36.467571 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="skipped missing lower profile" missing profile=oem
Oct  2 19:43:36.467620 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Oct  2 19:43:36.467640 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Oct  2 19:43:36.467911 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Oct  2 19:43:41.381370 systemd[1]: Starting systemd-journal-flush.service...
Oct  2 19:43:36.467961 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct  2 19:43:36.467981 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct  2 19:43:36.468765 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Oct  2 19:43:36.468820 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Oct  2 19:43:36.468861 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0
Oct  2 19:43:36.468885 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Oct  2 19:43:36.468913 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0
Oct  2 19:43:36.468935 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Oct  2 19:43:40.168273 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:40Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct  2 19:43:40.168567 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:40Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct  2 19:43:40.168690 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:40Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct  2 19:43:40.168880 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:40Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct  2 19:43:40.168931 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:40Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Oct  2 19:43:40.168990 /usr/lib/systemd/system-generators/torcx-generator[1319]: time="2023-10-02T19:43:40Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Oct  2 19:43:41.414570 systemd-journald[1396]: Time spent on flushing to /var/log/journal/ec2fdc29d71769197ae6cfdf8484a51e is 58.399ms for 1212 entries.
Oct  2 19:43:41.414570 systemd-journald[1396]: System Journal (/var/log/journal/ec2fdc29d71769197ae6cfdf8484a51e) is 8.0M, max 195.6M, 187.6M free.
Oct  2 19:43:41.489625 systemd-journald[1396]: Received client request to flush runtime journal.
Oct  2 19:43:41.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.441228 systemd[1]: Finished systemd-udev-trigger.service.
Oct  2 19:43:41.444485 systemd[1]: Starting systemd-udev-settle.service...
Oct  2 19:43:41.491154 udevadm[1436]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct  2 19:43:41.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:41.491146 systemd[1]: Finished systemd-journal-flush.service.
Oct  2 19:43:41.524346 systemd[1]: Finished systemd-sysusers.service.
Oct  2 19:43:41.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:42.349907 systemd[1]: Finished systemd-hwdb-update.service.
Oct  2 19:43:42.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:42.352000 audit: BPF prog-id=24 op=LOAD
Oct  2 19:43:42.352000 audit: BPF prog-id=25 op=LOAD
Oct  2 19:43:42.352000 audit: BPF prog-id=7 op=UNLOAD
Oct  2 19:43:42.352000 audit: BPF prog-id=8 op=UNLOAD
Oct  2 19:43:42.354478 systemd[1]: Starting systemd-udevd.service...
Oct  2 19:43:42.411365 systemd-udevd[1438]: Using default interface naming scheme 'v252'.
Oct  2 19:43:42.479579 systemd[1]: Started systemd-udevd.service.
Oct  2 19:43:42.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:42.482000 audit: BPF prog-id=26 op=LOAD
Oct  2 19:43:42.484905 systemd[1]: Starting systemd-networkd.service...
Oct  2 19:43:42.519000 audit: BPF prog-id=27 op=LOAD
Oct  2 19:43:42.519000 audit: BPF prog-id=28 op=LOAD
Oct  2 19:43:42.519000 audit: BPF prog-id=29 op=LOAD
Oct  2 19:43:42.521146 systemd[1]: Starting systemd-userdbd.service...
Oct  2 19:43:42.592590 (udev-worker)[1445]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 19:43:42.601282 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Oct  2 19:43:42.612695 systemd[1]: Started systemd-userdbd.service.
Oct  2 19:43:42.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:42.736294 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct  2 19:43:42.744016 kernel: ACPI: button: Power Button [PWRF]
Oct  2 19:43:42.744178 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Oct  2 19:43:42.756285 kernel: ACPI: button: Sleep Button [SLPF]
Oct  2 19:43:42.800033 systemd-networkd[1446]: lo: Link UP
Oct  2 19:43:42.800428 systemd-networkd[1446]: lo: Gained carrier
Oct  2 19:43:42.801125 systemd-networkd[1446]: Enumeration completed
Oct  2 19:43:42.801361 systemd[1]: Started systemd-networkd.service.
Oct  2 19:43:42.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:42.811945 systemd-networkd[1446]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct  2 19:43:42.813969 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct  2 19:43:42.818279 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Oct  2 19:43:42.819114 systemd-networkd[1446]: eth0: Link UP
Oct  2 19:43:42.819297 systemd-networkd[1446]: eth0: Gained carrier
Oct  2 19:43:42.829423 systemd-networkd[1446]: eth0: DHCPv4 address 172.31.22.191/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct  2 19:43:42.809000 audit[1455]: AVC avc:  denied  { confidentiality } for  pid=1455 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Oct  2 19:43:42.809000 audit[1455]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563577002fd0 a1=32194 a2=7f00b2ef1bc5 a3=5 items=106 ppid=1438 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:43:42.809000 audit: CWD cwd="/"
Oct  2 19:43:42.809000 audit: PATH item=0 name=(null) inode=15097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=1 name=(null) inode=15098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=2 name=(null) inode=15097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=3 name=(null) inode=15099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=4 name=(null) inode=15097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=5 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=6 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=7 name=(null) inode=15101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=8 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=9 name=(null) inode=15102 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=10 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=11 name=(null) inode=15103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=12 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=13 name=(null) inode=15104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=14 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=15 name=(null) inode=15105 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=16 name=(null) inode=15097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=17 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=18 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=19 name=(null) inode=15107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=20 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=21 name=(null) inode=15108 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=22 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=23 name=(null) inode=15109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=24 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=25 name=(null) inode=15110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=26 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=27 name=(null) inode=15111 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=28 name=(null) inode=15097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=29 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=30 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=31 name=(null) inode=15113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=32 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=33 name=(null) inode=15114 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=34 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=35 name=(null) inode=15115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=36 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=37 name=(null) inode=15116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=38 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=39 name=(null) inode=15117 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=40 name=(null) inode=15097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=41 name=(null) inode=15118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=42 name=(null) inode=15118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=43 name=(null) inode=15119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=44 name=(null) inode=15118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=45 name=(null) inode=15120 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=46 name=(null) inode=15118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=47 name=(null) inode=15121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=48 name=(null) inode=15118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=49 name=(null) inode=15122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=50 name=(null) inode=15118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=51 name=(null) inode=15123 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=52 name=(null) inode=42 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=53 name=(null) inode=15124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=54 name=(null) inode=15124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=55 name=(null) inode=15125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=56 name=(null) inode=15124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=57 name=(null) inode=15126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=58 name=(null) inode=15124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=59 name=(null) inode=15127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=60 name=(null) inode=15127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=61 name=(null) inode=15128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=62 name=(null) inode=15127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=63 name=(null) inode=15129 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=64 name=(null) inode=15127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=65 name=(null) inode=15130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=66 name=(null) inode=15127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=67 name=(null) inode=15131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=68 name=(null) inode=15127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=69 name=(null) inode=15132 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=70 name=(null) inode=15124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=71 name=(null) inode=15133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=72 name=(null) inode=15133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=73 name=(null) inode=15134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=74 name=(null) inode=15133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=75 name=(null) inode=15135 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=76 name=(null) inode=15133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=77 name=(null) inode=15136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=78 name=(null) inode=15133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=79 name=(null) inode=15137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=80 name=(null) inode=15133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=81 name=(null) inode=15138 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=82 name=(null) inode=15124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=83 name=(null) inode=15139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=84 name=(null) inode=15139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=85 name=(null) inode=15140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=86 name=(null) inode=15139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=87 name=(null) inode=15141 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=88 name=(null) inode=15139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=89 name=(null) inode=15142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=90 name=(null) inode=15139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=91 name=(null) inode=15143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=92 name=(null) inode=15139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=93 name=(null) inode=15144 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=94 name=(null) inode=15124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=95 name=(null) inode=15145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=96 name=(null) inode=15145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=97 name=(null) inode=15146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=98 name=(null) inode=15145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=99 name=(null) inode=15147 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=100 name=(null) inode=15145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=101 name=(null) inode=15148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=102 name=(null) inode=15145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=103 name=(null) inode=15149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=104 name=(null) inode=15145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PATH item=105 name=(null) inode=15150 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct  2 19:43:42.809000 audit: PROCTITLE proctitle="(udev-worker)"
Oct  2 19:43:42.861279 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Oct  2 19:43:42.871298 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Oct  2 19:43:42.884282 kernel: mousedev: PS/2 mouse device common for all mice
Oct  2 19:43:42.894348 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1451)
Oct  2 19:43:42.999190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct  2 19:43:43.078630 systemd[1]: Finished systemd-udev-settle.service.
Oct  2 19:43:43.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.081059 systemd[1]: Starting lvm2-activation-early.service...
Oct  2 19:43:43.118888 lvm[1552]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct  2 19:43:43.143408 systemd[1]: Finished lvm2-activation-early.service.
Oct  2 19:43:43.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.144759 systemd[1]: Reached target cryptsetup.target.
Oct  2 19:43:43.147006 systemd[1]: Starting lvm2-activation.service...
Oct  2 19:43:43.152672 lvm[1553]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct  2 19:43:43.176508 systemd[1]: Finished lvm2-activation.service.
Oct  2 19:43:43.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.177696 systemd[1]: Reached target local-fs-pre.target.
Oct  2 19:43:43.178747 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  2 19:43:43.178787 systemd[1]: Reached target local-fs.target.
Oct  2 19:43:43.179759 systemd[1]: Reached target machines.target.
Oct  2 19:43:43.182026 systemd[1]: Starting ldconfig.service...
Oct  2 19:43:43.183995 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct  2 19:43:43.184081 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 19:43:43.186441 systemd[1]: Starting systemd-boot-update.service...
Oct  2 19:43:43.191372 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Oct  2 19:43:43.194843 systemd[1]: Starting systemd-machine-id-commit.service...
Oct  2 19:43:43.198046 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Oct  2 19:43:43.198137 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Oct  2 19:43:43.203428 systemd[1]: Starting systemd-tmpfiles-setup.service...
Oct  2 19:43:43.216832 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1555 (bootctl)
Oct  2 19:43:43.221592 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Oct  2 19:43:43.235947 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Oct  2 19:43:43.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.266865 systemd-tmpfiles[1558]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Oct  2 19:43:43.286490 systemd-tmpfiles[1558]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct  2 19:43:43.297495 systemd-tmpfiles[1558]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct  2 19:43:43.412955 systemd-fsck[1563]: fsck.fat 4.2 (2021-01-31)
Oct  2 19:43:43.412955 systemd-fsck[1563]: /dev/nvme0n1p1: 789 files, 115069/258078 clusters
Oct  2 19:43:43.417163 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Oct  2 19:43:43.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.420853 systemd[1]: Mounting boot.mount...
Oct  2 19:43:43.436197 systemd[1]: Mounted boot.mount.
Oct  2 19:43:43.472312 systemd[1]: Finished systemd-boot-update.service.
Oct  2 19:43:43.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.575594 systemd[1]: Finished systemd-tmpfiles-setup.service.
Oct  2 19:43:43.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.578282 systemd[1]: Starting audit-rules.service...
Oct  2 19:43:43.582588 systemd[1]: Starting clean-ca-certificates.service...
Oct  2 19:43:43.586563 systemd[1]: Starting systemd-journal-catalog-update.service...
Oct  2 19:43:43.592000 audit: BPF prog-id=30 op=LOAD
Oct  2 19:43:43.593809 systemd[1]: Starting systemd-resolved.service...
Oct  2 19:43:43.596000 audit: BPF prog-id=31 op=LOAD
Oct  2 19:43:43.598555 systemd[1]: Starting systemd-timesyncd.service...
Oct  2 19:43:43.603571 systemd[1]: Starting systemd-update-utmp.service...
Oct  2 19:43:43.630810 systemd[1]: Finished clean-ca-certificates.service.
Oct  2 19:43:43.632470 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct  2 19:43:43.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.637000 audit[1584]: SYSTEM_BOOT pid=1584 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.641029 systemd[1]: Finished systemd-update-utmp.service.
Oct  2 19:43:43.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.700938 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct  2 19:43:43.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:43.756000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct  2 19:43:43.756000 audit[1598]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd4b51e090 a2=420 a3=0 items=0 ppid=1578 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:43:43.756000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct  2 19:43:43.758000 augenrules[1598]: No rules
Oct  2 19:43:43.757988 systemd[1]: Finished audit-rules.service.
Oct  2 19:43:43.776383 systemd-resolved[1582]: Positive Trust Anchors:
Oct  2 19:43:43.776775 systemd-resolved[1582]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct  2 19:43:43.776898 systemd-resolved[1582]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct  2 19:43:43.780769 systemd[1]: Started systemd-timesyncd.service.
Oct  2 19:43:43.782019 systemd[1]: Reached target time-set.target.
Oct  2 19:43:43.829703 systemd-resolved[1582]: Defaulting to hostname 'linux'.
Oct  2 19:43:43.834462 systemd[1]: Started systemd-resolved.service.
Oct  2 19:43:43.835885 systemd[1]: Reached target network.target.
Oct  2 19:43:43.837049 systemd[1]: Reached target nss-lookup.target.
Oct  2 19:43:43.976432 systemd-networkd[1446]: eth0: Gained IPv6LL
Oct  2 19:43:43.980132 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct  2 19:43:43.982814 systemd[1]: Reached target network-online.target.
Oct  2 19:43:44.824444 systemd-resolved[1582]: Clock change detected. Flushing caches.
Oct  2 19:43:44.824904 systemd-timesyncd[1583]: Contacted time server 173.255.243.207:123 (0.flatcar.pool.ntp.org).
Oct  2 19:43:44.825436 systemd-timesyncd[1583]: Initial clock synchronization to Mon 2023-10-02 19:43:44.824284 UTC.
Oct  2 19:43:44.882770 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct  2 19:43:44.883438 systemd[1]: Finished systemd-machine-id-commit.service.
Oct  2 19:43:45.124654 ldconfig[1554]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct  2 19:43:45.138993 systemd[1]: Finished ldconfig.service.
Oct  2 19:43:45.145334 systemd[1]: Starting systemd-update-done.service...
Oct  2 19:43:45.159624 systemd[1]: Finished systemd-update-done.service.
Oct  2 19:43:45.161092 systemd[1]: Reached target sysinit.target.
Oct  2 19:43:45.163433 systemd[1]: Started motdgen.path.
Oct  2 19:43:45.165213 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct  2 19:43:45.167153 systemd[1]: Started logrotate.timer.
Oct  2 19:43:45.169368 systemd[1]: Started mdadm.timer.
Oct  2 19:43:45.170324 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct  2 19:43:45.171605 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct  2 19:43:45.171634 systemd[1]: Reached target paths.target.
Oct  2 19:43:45.172763 systemd[1]: Reached target timers.target.
Oct  2 19:43:45.174228 systemd[1]: Listening on dbus.socket.
Oct  2 19:43:45.176526 systemd[1]: Starting docker.socket...
Oct  2 19:43:45.180767 systemd[1]: Listening on sshd.socket.
Oct  2 19:43:45.181910 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 19:43:45.182460 systemd[1]: Listening on docker.socket.
Oct  2 19:43:45.183740 systemd[1]: Reached target sockets.target.
Oct  2 19:43:45.185120 systemd[1]: Reached target basic.target.
Oct  2 19:43:45.186751 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct  2 19:43:45.186806 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct  2 19:43:45.188975 systemd[1]: Started amazon-ssm-agent.service.
Oct  2 19:43:45.198114 systemd[1]: Starting containerd.service...
Oct  2 19:43:45.226099 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Oct  2 19:43:45.230295 systemd[1]: Starting dbus.service...
Oct  2 19:43:45.242998 systemd[1]: Starting enable-oem-cloudinit.service...
Oct  2 19:43:45.247731 systemd[1]: Starting extend-filesystems.service...
Oct  2 19:43:45.249704 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct  2 19:43:45.258456 systemd[1]: Starting motdgen.service...
Oct  2 19:43:45.261513 systemd[1]: Started nvidia.service.
Oct  2 19:43:45.321569 jq[1613]: false
Oct  2 19:43:45.269858 systemd[1]: Starting prepare-cni-plugins.service...
Oct  2 19:43:45.278369 systemd[1]: Starting prepare-critools.service...
Oct  2 19:43:45.281788 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct  2 19:43:45.284833 systemd[1]: Starting sshd-keygen.service...
Oct  2 19:43:45.291067 systemd[1]: Starting systemd-logind.service...
Oct  2 19:43:45.292711 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 19:43:45.292787 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct  2 19:43:45.293578 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct  2 19:43:45.294785 systemd[1]: Starting update-engine.service...
Oct  2 19:43:45.300239 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct  2 19:43:45.361117 jq[1624]: true
Oct  2 19:43:45.308579 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct  2 19:43:45.308837 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct  2 19:43:45.313436 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct  2 19:43:45.313673 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct  2 19:43:45.417510 tar[1628]: ./
Oct  2 19:43:45.417510 tar[1628]: ./macvlan
Oct  2 19:43:45.429256 tar[1634]: crictl
Oct  2 19:43:45.476256 jq[1635]: true
Oct  2 19:43:45.574606 extend-filesystems[1614]: Found nvme0n1
Oct  2 19:43:45.577662 dbus-daemon[1612]: [system] SELinux support is enabled
Oct  2 19:43:45.581009 systemd[1]: Started dbus.service.
Oct  2 19:43:45.582472 extend-filesystems[1614]: Found nvme0n1p1
Oct  2 19:43:45.583893 extend-filesystems[1614]: Found nvme0n1p2
Oct  2 19:43:45.585133 extend-filesystems[1614]: Found nvme0n1p3
Oct  2 19:43:45.585292 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct  2 19:43:45.585328 systemd[1]: Reached target system-config.target.
Oct  2 19:43:45.587511 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct  2 19:43:45.587540 systemd[1]: Reached target user-config.target.
Oct  2 19:43:45.591348 extend-filesystems[1614]: Found usr
Oct  2 19:43:45.605036 extend-filesystems[1614]: Found nvme0n1p4
Oct  2 19:43:45.606576 extend-filesystems[1614]: Found nvme0n1p6
Oct  2 19:43:45.608100 extend-filesystems[1614]: Found nvme0n1p7
Oct  2 19:43:45.609773 extend-filesystems[1614]: Found nvme0n1p9
Oct  2 19:43:45.615469 extend-filesystems[1614]: Checking size of /dev/nvme0n1p9
Oct  2 19:43:45.628169 systemd[1]: motdgen.service: Deactivated successfully.
Oct  2 19:43:45.628475 systemd[1]: Finished motdgen.service.
Oct  2 19:43:45.658689 bash[1671]: Updated "/home/core/.ssh/authorized_keys"
Oct  2 19:43:45.660860 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct  2 19:43:45.663193 dbus-daemon[1612]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1446 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Oct  2 19:43:45.667309 extend-filesystems[1614]: Resized partition /dev/nvme0n1p9
Oct  2 19:43:45.695267 amazon-ssm-agent[1609]: 2023/10/02 19:43:45 Failed to load instance info from vault. RegistrationKey does not exist.
Oct  2 19:43:45.695888 dbus-daemon[1612]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct  2 19:43:45.696898 amazon-ssm-agent[1609]: Initializing new seelog logger
Oct  2 19:43:45.696898 amazon-ssm-agent[1609]: New Seelog Logger Creation Complete
Oct  2 19:43:45.696898 amazon-ssm-agent[1609]: 2023/10/02 19:43:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct  2 19:43:45.696898 amazon-ssm-agent[1609]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct  2 19:43:45.696898 amazon-ssm-agent[1609]: 2023/10/02 19:43:45 processing appconfig overrides
Oct  2 19:43:45.704590 systemd[1]: Starting systemd-hostnamed.service...
Oct  2 19:43:45.714715 extend-filesystems[1678]: resize2fs 1.46.5 (30-Dec-2021)
Oct  2 19:43:45.733044 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Oct  2 19:43:45.809137 update_engine[1623]: I1002 19:43:45.805164  1623 main.cc:92] Flatcar Update Engine starting
Oct  2 19:43:45.822421 systemd[1]: Started update-engine.service.
Oct  2 19:43:45.824439 update_engine[1623]: I1002 19:43:45.823925  1623 update_check_scheduler.cc:74] Next update check in 6m37s
Oct  2 19:43:45.828078 systemd[1]: Started locksmithd.service.
Oct  2 19:43:45.867416 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Oct  2 19:43:45.928160 extend-filesystems[1678]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Oct  2 19:43:45.928160 extend-filesystems[1678]: old_desc_blocks = 1, new_desc_blocks = 1
Oct  2 19:43:45.928160 extend-filesystems[1678]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Oct  2 19:43:45.935200 extend-filesystems[1614]: Resized filesystem in /dev/nvme0n1p9
Oct  2 19:43:45.928525 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct  2 19:43:45.936846 env[1632]: time="2023-10-02T19:43:45.935681337Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct  2 19:43:45.928730 systemd[1]: Finished extend-filesystems.service.
Oct  2 19:43:45.964950 systemd[1]: nvidia.service: Deactivated successfully.
Oct  2 19:43:45.979068 systemd-logind[1622]: Watching system buttons on /dev/input/event1 (Power Button)
Oct  2 19:43:45.979456 systemd-logind[1622]: Watching system buttons on /dev/input/event2 (Sleep Button)
Oct  2 19:43:45.979586 systemd-logind[1622]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct  2 19:43:45.979950 systemd-logind[1622]: New seat seat0.
Oct  2 19:43:45.983227 systemd[1]: Started systemd-logind.service.
Oct  2 19:43:46.018185 tar[1628]: ./static
Oct  2 19:43:46.167912 env[1632]: time="2023-10-02T19:43:46.167826418Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct  2 19:43:46.173746 env[1632]: time="2023-10-02T19:43:46.173696130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct  2 19:43:46.176783 env[1632]: time="2023-10-02T19:43:46.176734029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct  2 19:43:46.176947 env[1632]: time="2023-10-02T19:43:46.176928435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct  2 19:43:46.177318 env[1632]: time="2023-10-02T19:43:46.177290243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct  2 19:43:46.182759 env[1632]: time="2023-10-02T19:43:46.182546260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct  2 19:43:46.184591 env[1632]: time="2023-10-02T19:43:46.184546306Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct  2 19:43:46.184853 env[1632]: time="2023-10-02T19:43:46.184823961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct  2 19:43:46.185096 env[1632]: time="2023-10-02T19:43:46.185075664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct  2 19:43:46.186016 env[1632]: time="2023-10-02T19:43:46.185985417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct  2 19:43:46.186266 env[1632]: time="2023-10-02T19:43:46.186234464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct  2 19:43:46.186328 env[1632]: time="2023-10-02T19:43:46.186267901Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct  2 19:43:46.186387 env[1632]: time="2023-10-02T19:43:46.186342380Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct  2 19:43:46.186387 env[1632]: time="2023-10-02T19:43:46.186360318Z" level=info msg="metadata content store policy set" policy=shared
Oct  2 19:43:46.206714 env[1632]: time="2023-10-02T19:43:46.206587750Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct  2 19:43:46.206887 env[1632]: time="2023-10-02T19:43:46.206748623Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct  2 19:43:46.206887 env[1632]: time="2023-10-02T19:43:46.206771489Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct  2 19:43:46.206887 env[1632]: time="2023-10-02T19:43:46.206838285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.206887 env[1632]: time="2023-10-02T19:43:46.206859452Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.207053 env[1632]: time="2023-10-02T19:43:46.206891694Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.207053 env[1632]: time="2023-10-02T19:43:46.206910506Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.207053 env[1632]: time="2023-10-02T19:43:46.206931322Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.207053 env[1632]: time="2023-10-02T19:43:46.206966654Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.207053 env[1632]: time="2023-10-02T19:43:46.206986623Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.207053 env[1632]: time="2023-10-02T19:43:46.207006186Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.207053 env[1632]: time="2023-10-02T19:43:46.207041169Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct  2 19:43:46.207387 env[1632]: time="2023-10-02T19:43:46.207332650Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct  2 19:43:46.207519 env[1632]: time="2023-10-02T19:43:46.207481632Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct  2 19:43:46.207955 env[1632]: time="2023-10-02T19:43:46.207934119Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct  2 19:43:46.208016 env[1632]: time="2023-10-02T19:43:46.207986372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208016 env[1632]: time="2023-10-02T19:43:46.208010664Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct  2 19:43:46.208122 env[1632]: time="2023-10-02T19:43:46.208094754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208180 env[1632]: time="2023-10-02T19:43:46.208141293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208180 env[1632]: time="2023-10-02T19:43:46.208162343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208362 env[1632]: time="2023-10-02T19:43:46.208184487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208415 env[1632]: time="2023-10-02T19:43:46.208384409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208415 env[1632]: time="2023-10-02T19:43:46.208408720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208503 env[1632]: time="2023-10-02T19:43:46.208429029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208503 env[1632]: time="2023-10-02T19:43:46.208460552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208589 env[1632]: time="2023-10-02T19:43:46.208502302Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct  2 19:43:46.208716 env[1632]: time="2023-10-02T19:43:46.208695159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208775 env[1632]: time="2023-10-02T19:43:46.208737467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208775 env[1632]: time="2023-10-02T19:43:46.208760533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.208851 env[1632]: time="2023-10-02T19:43:46.208778145Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct  2 19:43:46.208851 env[1632]: time="2023-10-02T19:43:46.208834181Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct  2 19:43:46.208930 env[1632]: time="2023-10-02T19:43:46.208852947Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct  2 19:43:46.208930 env[1632]: time="2023-10-02T19:43:46.208892194Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct  2 19:43:46.209005 env[1632]: time="2023-10-02T19:43:46.208938158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct  2 19:43:46.209383 env[1632]: time="2023-10-02T19:43:46.209301262Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.209402723Z" level=info msg="Connect containerd service"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.209457090Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.210299129Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.210437799Z" level=info msg="Start subscribing containerd event"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.210509758Z" level=info msg="Start recovering state"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.210583350Z" level=info msg="Start event monitor"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.210599820Z" level=info msg="Start snapshots syncer"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.210613472Z" level=info msg="Start cni network conf syncer for default"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.210625898Z" level=info msg="Start streaming server"
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.211111466Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct  2 19:43:46.212656 env[1632]: time="2023-10-02T19:43:46.211208309Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct  2 19:43:46.230635 systemd[1]: Started containerd.service.
Oct  2 19:43:46.247042 tar[1628]: ./vlan
Oct  2 19:43:46.257151 env[1632]: time="2023-10-02T19:43:46.257109118Z" level=info msg="containerd successfully booted in 0.351494s"
Oct  2 19:43:46.260243 dbus-daemon[1612]: [system] Successfully activated service 'org.freedesktop.hostname1'
Oct  2 19:43:46.260406 systemd[1]: Started systemd-hostnamed.service.
Oct  2 19:43:46.262639 dbus-daemon[1612]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1679 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Oct  2 19:43:46.266450 systemd[1]: Starting polkit.service...
Oct  2 19:43:46.310460 amazon-ssm-agent[1609]: 2023-10-02 19:43:46 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-02b440a24027c297f is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-02b440a24027c297f because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct  2 19:43:46.310460 amazon-ssm-agent[1609]:         status code: 400, request id: 3aff344f-3935-473a-bdaf-54c00b2e7354
Oct  2 19:43:46.310783 amazon-ssm-agent[1609]: 2023-10-02 19:43:46 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period
Oct  2 19:43:46.315046 polkitd[1743]: Started polkitd version 121
Oct  2 19:43:46.346622 polkitd[1743]: Loading rules from directory /etc/polkit-1/rules.d
Oct  2 19:43:46.347596 polkitd[1743]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct  2 19:43:46.354872 polkitd[1743]: Finished loading, compiling and executing 2 rules
Oct  2 19:43:46.358085 dbus-daemon[1612]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct  2 19:43:46.358298 systemd[1]: Started polkit.service.
Oct  2 19:43:46.359658 polkitd[1743]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct  2 19:43:46.380065 systemd-hostnamed[1679]: Hostname set to <ip-172-31-22-191> (transient)
Oct  2 19:43:46.380194 systemd-resolved[1582]: System hostname changed to 'ip-172-31-22-191'.
Oct  2 19:43:46.438667 tar[1628]: ./portmap
Oct  2 19:43:46.494674 coreos-metadata[1611]: Oct 02 19:43:46.494 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct  2 19:43:46.501632 coreos-metadata[1611]: Oct 02 19:43:46.501 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Oct  2 19:43:46.502930 coreos-metadata[1611]: Oct 02 19:43:46.502 INFO Fetch successful
Oct  2 19:43:46.503185 coreos-metadata[1611]: Oct 02 19:43:46.503 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Oct  2 19:43:46.504269 coreos-metadata[1611]: Oct 02 19:43:46.504 INFO Fetch successful
Oct  2 19:43:46.507397 unknown[1611]: wrote ssh authorized keys file for user: core
Oct  2 19:43:46.545516 update-ssh-keys[1783]: Updated "/home/core/.ssh/authorized_keys"
Oct  2 19:43:46.546561 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Oct  2 19:43:46.563626 tar[1628]: ./host-local
Oct  2 19:43:46.673383 tar[1628]: ./vrf
Oct  2 19:43:46.789205 tar[1628]: ./bridge
Oct  2 19:43:46.838400 systemd[1]: Finished prepare-critools.service.
Oct  2 19:43:46.875673 tar[1628]: ./tuning
Oct  2 19:43:46.913405 tar[1628]: ./firewall
Oct  2 19:43:46.962620 tar[1628]: ./host-device
Oct  2 19:43:47.007661 tar[1628]: ./sbr
Oct  2 19:43:47.049701 tar[1628]: ./loopback
Oct  2 19:43:47.096391 tar[1628]: ./dhcp
Oct  2 19:43:47.223889 tar[1628]: ./ptp
Oct  2 19:43:47.273238 tar[1628]: ./ipvlan
Oct  2 19:43:47.329004 tar[1628]: ./bandwidth
Oct  2 19:43:47.411600 systemd[1]: Finished prepare-cni-plugins.service.
Oct  2 19:43:47.531913 locksmithd[1688]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct  2 19:43:48.261334 sshd_keygen[1652]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct  2 19:43:48.292406 systemd[1]: Finished sshd-keygen.service.
Oct  2 19:43:48.295992 systemd[1]: Starting issuegen.service...
Oct  2 19:43:48.307319 systemd[1]: issuegen.service: Deactivated successfully.
Oct  2 19:43:48.307547 systemd[1]: Finished issuegen.service.
Oct  2 19:43:48.311265 systemd[1]: Starting systemd-user-sessions.service...
Oct  2 19:43:48.319763 systemd[1]: Finished systemd-user-sessions.service.
Oct  2 19:43:48.322816 systemd[1]: Started getty@tty1.service.
Oct  2 19:43:48.325656 systemd[1]: Started serial-getty@ttyS0.service.
Oct  2 19:43:48.328311 systemd[1]: Reached target getty.target.
Oct  2 19:43:48.330146 systemd[1]: Reached target multi-user.target.
Oct  2 19:43:48.333405 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct  2 19:43:48.347638 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  2 19:43:48.347817 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct  2 19:43:48.349793 systemd[1]: Startup finished in 781ms (kernel) + 10.061s (initrd) + 11.591s (userspace) = 22.435s.
Oct  2 19:43:54.759238 systemd[1]: Created slice system-sshd.slice.
Oct  2 19:43:54.761016 systemd[1]: Started sshd@0-172.31.22.191:22-139.178.89.65:50012.service.
Oct  2 19:43:55.003987 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 50012 ssh2: RSA SHA256:2tpfHuF3vLKbRg5mH6Od9mbM0lpjrGjQfzRVUhbm/E8
Oct  2 19:43:55.006819 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct  2 19:43:55.021414 systemd[1]: Created slice user-500.slice.
Oct  2 19:43:55.023581 systemd[1]: Starting user-runtime-dir@500.service...
Oct  2 19:43:55.030022 systemd-logind[1622]: New session 1 of user core.
Oct  2 19:43:55.038687 systemd[1]: Finished user-runtime-dir@500.service.
Oct  2 19:43:55.042206 systemd[1]: Starting user@500.service...
Oct  2 19:43:55.048405 (systemd)[1824]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct  2 19:43:55.159598 systemd[1824]: Queued start job for default target default.target.
Oct  2 19:43:55.160609 systemd[1824]: Reached target paths.target.
Oct  2 19:43:55.160643 systemd[1824]: Reached target sockets.target.
Oct  2 19:43:55.160662 systemd[1824]: Reached target timers.target.
Oct  2 19:43:55.160679 systemd[1824]: Reached target basic.target.
Oct  2 19:43:55.160739 systemd[1824]: Reached target default.target.
Oct  2 19:43:55.160779 systemd[1824]: Startup finished in 103ms.
Oct  2 19:43:55.161717 systemd[1]: Started user@500.service.
Oct  2 19:43:55.163238 systemd[1]: Started session-1.scope.
Oct  2 19:43:55.310109 systemd[1]: Started sshd@1-172.31.22.191:22-139.178.89.65:50028.service.
Oct  2 19:43:55.484220 sshd[1833]: Accepted publickey for core from 139.178.89.65 port 50028 ssh2: RSA SHA256:2tpfHuF3vLKbRg5mH6Od9mbM0lpjrGjQfzRVUhbm/E8
Oct  2 19:43:55.486068 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct  2 19:43:55.493329 systemd[1]: Started session-2.scope.
Oct  2 19:43:55.493812 systemd-logind[1622]: New session 2 of user core.
Oct  2 19:43:55.628058 sshd[1833]: pam_unix(sshd:session): session closed for user core
Oct  2 19:43:55.632041 systemd[1]: sshd@1-172.31.22.191:22-139.178.89.65:50028.service: Deactivated successfully.
Oct  2 19:43:55.633174 systemd[1]: session-2.scope: Deactivated successfully.
Oct  2 19:43:55.636405 systemd-logind[1622]: Session 2 logged out. Waiting for processes to exit.
Oct  2 19:43:55.637407 systemd-logind[1622]: Removed session 2.
Oct  2 19:43:55.657349 systemd[1]: Started sshd@2-172.31.22.191:22-139.178.89.65:50044.service.
Oct  2 19:43:55.831468 sshd[1839]: Accepted publickey for core from 139.178.89.65 port 50044 ssh2: RSA SHA256:2tpfHuF3vLKbRg5mH6Od9mbM0lpjrGjQfzRVUhbm/E8
Oct  2 19:43:55.833182 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct  2 19:43:55.840834 systemd-logind[1622]: New session 3 of user core.
Oct  2 19:43:55.841456 systemd[1]: Started session-3.scope.
Oct  2 19:43:55.961713 sshd[1839]: pam_unix(sshd:session): session closed for user core
Oct  2 19:43:55.964921 systemd[1]: sshd@2-172.31.22.191:22-139.178.89.65:50044.service: Deactivated successfully.
Oct  2 19:43:55.966090 systemd[1]: session-3.scope: Deactivated successfully.
Oct  2 19:43:55.966898 systemd-logind[1622]: Session 3 logged out. Waiting for processes to exit.
Oct  2 19:43:55.967774 systemd-logind[1622]: Removed session 3.
Oct  2 19:43:55.998017 systemd[1]: Started sshd@3-172.31.22.191:22-139.178.89.65:55612.service.
Oct  2 19:43:56.187508 sshd[1845]: Accepted publickey for core from 139.178.89.65 port 55612 ssh2: RSA SHA256:2tpfHuF3vLKbRg5mH6Od9mbM0lpjrGjQfzRVUhbm/E8
Oct  2 19:43:56.189825 sshd[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct  2 19:43:56.213452 systemd-logind[1622]: New session 4 of user core.
Oct  2 19:43:56.214084 systemd[1]: Started session-4.scope.
Oct  2 19:43:56.351480 sshd[1845]: pam_unix(sshd:session): session closed for user core
Oct  2 19:43:56.355200 systemd-logind[1622]: Session 4 logged out. Waiting for processes to exit.
Oct  2 19:43:56.355574 systemd[1]: sshd@3-172.31.22.191:22-139.178.89.65:55612.service: Deactivated successfully.
Oct  2 19:43:56.357026 systemd[1]: session-4.scope: Deactivated successfully.
Oct  2 19:43:56.358096 systemd-logind[1622]: Removed session 4.
Oct  2 19:43:56.377235 systemd[1]: Started sshd@4-172.31.22.191:22-139.178.89.65:55616.service.
Oct  2 19:43:56.549103 sshd[1851]: Accepted publickey for core from 139.178.89.65 port 55616 ssh2: RSA SHA256:2tpfHuF3vLKbRg5mH6Od9mbM0lpjrGjQfzRVUhbm/E8
Oct  2 19:43:56.550110 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct  2 19:43:56.556557 systemd-logind[1622]: New session 5 of user core.
Oct  2 19:43:56.557120 systemd[1]: Started session-5.scope.
Oct  2 19:43:56.676846 sudo[1854]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct  2 19:43:56.677138 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct  2 19:43:56.685830 dbus-daemon[1612]: avc:  received setenforce notice (enforcing=1)
Oct  2 19:43:56.688353 sudo[1854]: pam_unix(sudo:session): session closed for user root
Oct  2 19:43:56.712237 sshd[1851]: pam_unix(sshd:session): session closed for user core
Oct  2 19:43:56.717916 systemd[1]: sshd@4-172.31.22.191:22-139.178.89.65:55616.service: Deactivated successfully.
Oct  2 19:43:56.718934 systemd[1]: session-5.scope: Deactivated successfully.
Oct  2 19:43:56.719671 systemd-logind[1622]: Session 5 logged out. Waiting for processes to exit.
Oct  2 19:43:56.720688 systemd-logind[1622]: Removed session 5.
Oct  2 19:43:56.740156 systemd[1]: Started sshd@5-172.31.22.191:22-139.178.89.65:55624.service.
Oct  2 19:43:56.912505 sshd[1858]: Accepted publickey for core from 139.178.89.65 port 55624 ssh2: RSA SHA256:2tpfHuF3vLKbRg5mH6Od9mbM0lpjrGjQfzRVUhbm/E8
Oct  2 19:43:56.913953 sshd[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct  2 19:43:56.918468 systemd-logind[1622]: New session 6 of user core.
Oct  2 19:43:56.919054 systemd[1]: Started session-6.scope.
Oct  2 19:43:57.025768 sudo[1862]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct  2 19:43:57.026065 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct  2 19:43:57.029947 sudo[1862]: pam_unix(sudo:session): session closed for user root
Oct  2 19:43:57.035315 sudo[1861]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct  2 19:43:57.035620 sudo[1861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct  2 19:43:57.045597 systemd[1]: Stopping audit-rules.service...
Oct  2 19:43:57.046000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct  2 19:43:57.048416 kernel: kauditd_printk_skb: 186 callbacks suppressed
Oct  2 19:43:57.048470 kernel: audit: type=1305 audit(1696275837.046:169): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct  2 19:43:57.046000 audit[1865]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff23bfe0c0 a2=420 a3=0 items=0 ppid=1 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:43:57.057584 kernel: audit: type=1300 audit(1696275837.046:169): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff23bfe0c0 a2=420 a3=0 items=0 ppid=1 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:43:57.057665 kernel: audit: type=1327 audit(1696275837.046:169): proctitle=2F7362696E2F617564697463746C002D44
Oct  2 19:43:57.046000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Oct  2 19:43:57.057738 auditctl[1865]: No rules
Oct  2 19:43:57.065042 kernel: audit: type=1131 audit(1696275837.057:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.058545 systemd[1]: audit-rules.service: Deactivated successfully.
Oct  2 19:43:57.058743 systemd[1]: Stopped audit-rules.service.
Oct  2 19:43:57.066254 systemd[1]: Starting audit-rules.service...
Oct  2 19:43:57.085742 augenrules[1882]: No rules
Oct  2 19:43:57.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.087740 sudo[1861]: pam_unix(sudo:session): session closed for user root
Oct  2 19:43:57.086593 systemd[1]: Finished audit-rules.service.
Oct  2 19:43:57.087000 audit[1861]: USER_END pid=1861 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.097427 kernel: audit: type=1130 audit(1696275837.086:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.097710 kernel: audit: type=1106 audit(1696275837.087:172): pid=1861 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.097744 kernel: audit: type=1104 audit(1696275837.087:173): pid=1861 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.087000 audit[1861]: CRED_DISP pid=1861 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.110868 sshd[1858]: pam_unix(sshd:session): session closed for user core
Oct  2 19:43:57.112000 audit[1858]: USER_END pid=1858 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:43:57.115325 systemd[1]: sshd@5-172.31.22.191:22-139.178.89.65:55624.service: Deactivated successfully.
Oct  2 19:43:57.116224 systemd[1]: session-6.scope: Deactivated successfully.
Oct  2 19:43:57.117707 systemd-logind[1622]: Session 6 logged out. Waiting for processes to exit.
Oct  2 19:43:57.118850 systemd-logind[1622]: Removed session 6.
Oct  2 19:43:57.112000 audit[1858]: CRED_DISP pid=1858 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:43:57.124891 kernel: audit: type=1106 audit(1696275837.112:174): pid=1858 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:43:57.124969 kernel: audit: type=1104 audit(1696275837.112:175): pid=1858 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:43:57.125008 kernel: audit: type=1131 audit(1696275837.112:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.22.191:22-139.178.89.65:55624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.22.191:22-139.178.89.65:55624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.191:22-139.178.89.65:55632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.139383 systemd[1]: Started sshd@6-172.31.22.191:22-139.178.89.65:55632.service.
Oct  2 19:43:57.300000 audit[1888]: USER_ACCT pid=1888 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:43:57.301205 sshd[1888]: Accepted publickey for core from 139.178.89.65 port 55632 ssh2: RSA SHA256:2tpfHuF3vLKbRg5mH6Od9mbM0lpjrGjQfzRVUhbm/E8
Oct  2 19:43:57.302000 audit[1888]: CRED_ACQ pid=1888 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:43:57.302000 audit[1888]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebe00a070 a2=3 a3=0 items=0 ppid=1 pid=1888 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:43:57.302000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct  2 19:43:57.303560 sshd[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct  2 19:43:57.308651 systemd[1]: Started session-7.scope.
Oct  2 19:43:57.309264 systemd-logind[1622]: New session 7 of user core.
Oct  2 19:43:57.313000 audit[1888]: USER_START pid=1888 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:43:57.315000 audit[1890]: CRED_ACQ pid=1890 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:43:57.412000 audit[1891]: USER_ACCT pid=1891 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.412653 sudo[1891]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct  2 19:43:57.412000 audit[1891]: CRED_REFR pid=1891 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.412958 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct  2 19:43:57.414000 audit[1891]: USER_START pid=1891 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:57.997293 systemd[1]: Reloading.
Oct  2 19:43:58.100315 /usr/lib/systemd/system-generators/torcx-generator[1920]: time="2023-10-02T19:43:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct  2 19:43:58.100359 /usr/lib/systemd/system-generators/torcx-generator[1920]: time="2023-10-02T19:43:58Z" level=info msg="torcx already run"
Oct  2 19:43:58.233461 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct  2 19:43:58.233507 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct  2 19:43:58.258988 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.355000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.356000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.356000 audit: BPF prog-id=40 op=LOAD
Oct  2 19:43:58.356000 audit: BPF prog-id=32 op=UNLOAD
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit: BPF prog-id=41 op=LOAD
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit: BPF prog-id=42 op=LOAD
Oct  2 19:43:58.357000 audit: BPF prog-id=33 op=UNLOAD
Oct  2 19:43:58.357000 audit: BPF prog-id=34 op=UNLOAD
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.357000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit: BPF prog-id=43 op=LOAD
Oct  2 19:43:58.358000 audit: BPF prog-id=27 op=UNLOAD
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit: BPF prog-id=44 op=LOAD
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.358000 audit: BPF prog-id=45 op=LOAD
Oct  2 19:43:58.358000 audit: BPF prog-id=28 op=UNLOAD
Oct  2 19:43:58.358000 audit: BPF prog-id=29 op=UNLOAD
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.359000 audit: BPF prog-id=46 op=LOAD
Oct  2 19:43:58.359000 audit: BPF prog-id=30 op=UNLOAD
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.362000 audit: BPF prog-id=47 op=LOAD
Oct  2 19:43:58.362000 audit: BPF prog-id=31 op=UNLOAD
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit: BPF prog-id=48 op=LOAD
Oct  2 19:43:58.364000 audit: BPF prog-id=35 op=UNLOAD
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit: BPF prog-id=49 op=LOAD
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.364000 audit: BPF prog-id=50 op=LOAD
Oct  2 19:43:58.364000 audit: BPF prog-id=36 op=UNLOAD
Oct  2 19:43:58.364000 audit: BPF prog-id=37 op=UNLOAD
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.366000 audit: BPF prog-id=51 op=LOAD
Oct  2 19:43:58.366000 audit: BPF prog-id=38 op=UNLOAD
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.369000 audit: BPF prog-id=52 op=LOAD
Oct  2 19:43:58.369000 audit: BPF prog-id=26 op=UNLOAD
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.370000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit: BPF prog-id=53 op=LOAD
Oct  2 19:43:58.371000 audit: BPF prog-id=21 op=UNLOAD
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit: BPF prog-id=54 op=LOAD
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit: BPF prog-id=55 op=LOAD
Oct  2 19:43:58.371000 audit: BPF prog-id=22 op=UNLOAD
Oct  2 19:43:58.371000 audit: BPF prog-id=23 op=UNLOAD
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit: BPF prog-id=56 op=LOAD
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:58.371000 audit: BPF prog-id=57 op=LOAD
Oct  2 19:43:58.371000 audit: BPF prog-id=24 op=UNLOAD
Oct  2 19:43:58.371000 audit: BPF prog-id=25 op=UNLOAD
Oct  2 19:43:58.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:58.388618 systemd[1]: Started kubelet.service.
Oct  2 19:43:58.407584 systemd[1]: Starting coreos-metadata.service...
Oct  2 19:43:58.493655 kubelet[1972]: E1002 19:43:58.493394    1972 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct  2 19:43:58.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct  2 19:43:58.499982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 19:43:58.500185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct  2 19:43:58.551654 coreos-metadata[1981]: Oct 02 19:43:58.549 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct  2 19:43:58.559995 coreos-metadata[1981]: Oct 02 19:43:58.559 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Oct  2 19:43:58.560973 coreos-metadata[1981]: Oct 02 19:43:58.560 INFO Fetch successful
Oct  2 19:43:58.561346 coreos-metadata[1981]: Oct 02 19:43:58.561 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Oct  2 19:43:58.562728 coreos-metadata[1981]: Oct 02 19:43:58.562 INFO Fetch successful
Oct  2 19:43:58.562985 coreos-metadata[1981]: Oct 02 19:43:58.562 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Oct  2 19:43:58.564195 coreos-metadata[1981]: Oct 02 19:43:58.564 INFO Fetch successful
Oct  2 19:43:58.564274 coreos-metadata[1981]: Oct 02 19:43:58.564 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Oct  2 19:43:58.564727 coreos-metadata[1981]: Oct 02 19:43:58.564 INFO Fetch successful
Oct  2 19:43:58.564832 coreos-metadata[1981]: Oct 02 19:43:58.564 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Oct  2 19:43:58.565610 coreos-metadata[1981]: Oct 02 19:43:58.565 INFO Fetch successful
Oct  2 19:43:58.565687 coreos-metadata[1981]: Oct 02 19:43:58.565 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Oct  2 19:43:58.566570 coreos-metadata[1981]: Oct 02 19:43:58.566 INFO Fetch successful
Oct  2 19:43:58.566687 coreos-metadata[1981]: Oct 02 19:43:58.566 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Oct  2 19:43:58.572445 coreos-metadata[1981]: Oct 02 19:43:58.572 INFO Fetch successful
Oct  2 19:43:58.572445 coreos-metadata[1981]: Oct 02 19:43:58.572 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Oct  2 19:43:58.573121 coreos-metadata[1981]: Oct 02 19:43:58.573 INFO Fetch successful
Oct  2 19:43:58.584746 systemd[1]: Finished coreos-metadata.service.
Oct  2 19:43:58.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:59.081332 systemd[1]: Stopped kubelet.service.
Oct  2 19:43:59.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:59.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:59.117308 systemd[1]: Reloading.
Oct  2 19:43:59.259664 /usr/lib/systemd/system-generators/torcx-generator[2041]: time="2023-10-02T19:43:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct  2 19:43:59.259706 /usr/lib/systemd/system-generators/torcx-generator[2041]: time="2023-10-02T19:43:59Z" level=info msg="torcx already run"
Oct  2 19:43:59.343575 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct  2 19:43:59.343597 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct  2 19:43:59.364387 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.448000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit: BPF prog-id=58 op=LOAD
Oct  2 19:43:59.449000 audit: BPF prog-id=40 op=UNLOAD
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit: BPF prog-id=59 op=LOAD
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.449000 audit: BPF prog-id=60 op=LOAD
Oct  2 19:43:59.449000 audit: BPF prog-id=41 op=UNLOAD
Oct  2 19:43:59.449000 audit: BPF prog-id=42 op=UNLOAD
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit: BPF prog-id=61 op=LOAD
Oct  2 19:43:59.454000 audit: BPF prog-id=43 op=UNLOAD
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit: BPF prog-id=62 op=LOAD
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.454000 audit: BPF prog-id=63 op=LOAD
Oct  2 19:43:59.454000 audit: BPF prog-id=44 op=UNLOAD
Oct  2 19:43:59.454000 audit: BPF prog-id=45 op=UNLOAD
Oct  2 19:43:59.459000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.459000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.459000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.459000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.459000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.460000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.460000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.460000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.460000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.460000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.460000 audit: BPF prog-id=64 op=LOAD
Oct  2 19:43:59.460000 audit: BPF prog-id=46 op=UNLOAD
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.471000 audit: BPF prog-id=65 op=LOAD
Oct  2 19:43:59.471000 audit: BPF prog-id=47 op=UNLOAD
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.472000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit: BPF prog-id=66 op=LOAD
Oct  2 19:43:59.473000 audit: BPF prog-id=48 op=UNLOAD
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit: BPF prog-id=67 op=LOAD
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.473000 audit: BPF prog-id=68 op=LOAD
Oct  2 19:43:59.473000 audit: BPF prog-id=49 op=UNLOAD
Oct  2 19:43:59.473000 audit: BPF prog-id=50 op=UNLOAD
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.474000 audit: BPF prog-id=69 op=LOAD
Oct  2 19:43:59.475000 audit: BPF prog-id=51 op=UNLOAD
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.477000 audit: BPF prog-id=70 op=LOAD
Oct  2 19:43:59.477000 audit: BPF prog-id=52 op=UNLOAD
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.478000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit: BPF prog-id=71 op=LOAD
Oct  2 19:43:59.479000 audit: BPF prog-id=53 op=UNLOAD
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit: BPF prog-id=72 op=LOAD
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit: BPF prog-id=73 op=LOAD
Oct  2 19:43:59.479000 audit: BPF prog-id=54 op=UNLOAD
Oct  2 19:43:59.479000 audit: BPF prog-id=55 op=UNLOAD
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.479000 audit: BPF prog-id=74 op=LOAD
Oct  2 19:43:59.479000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:43:59.480000 audit: BPF prog-id=75 op=LOAD
Oct  2 19:43:59.480000 audit: BPF prog-id=56 op=UNLOAD
Oct  2 19:43:59.480000 audit: BPF prog-id=57 op=UNLOAD
Oct  2 19:43:59.500902 systemd[1]: Started kubelet.service.
Oct  2 19:43:59.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:43:59.553521 kubelet[2091]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Oct  2 19:43:59.553521 kubelet[2091]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Oct  2 19:43:59.553521 kubelet[2091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct  2 19:43:59.554129 kubelet[2091]: I1002 19:43:59.553590    2091 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct  2 19:43:59.555252 kubelet[2091]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Oct  2 19:43:59.555252 kubelet[2091]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Oct  2 19:43:59.555252 kubelet[2091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct  2 19:44:00.599033 kubelet[2091]: I1002 19:44:00.598994    2091 server.go:413] "Kubelet version" kubeletVersion="v1.25.10"
Oct  2 19:44:00.599033 kubelet[2091]: I1002 19:44:00.599032    2091 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct  2 19:44:00.599929 kubelet[2091]: I1002 19:44:00.599704    2091 server.go:825] "Client rotation is on, will bootstrap in background"
Oct  2 19:44:00.607027 kubelet[2091]: I1002 19:44:00.606991    2091 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct  2 19:44:00.610225 kubelet[2091]: I1002 19:44:00.610183    2091 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Oct  2 19:44:00.612032 kubelet[2091]: I1002 19:44:00.611933    2091 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct  2 19:44:00.612168 kubelet[2091]: I1002 19:44:00.612115    2091 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Oct  2 19:44:00.612168 kubelet[2091]: I1002 19:44:00.612143    2091 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Oct  2 19:44:00.612168 kubelet[2091]: I1002 19:44:00.612160    2091 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
Oct  2 19:44:00.612402 kubelet[2091]: I1002 19:44:00.612297    2091 state_mem.go:36] "Initialized new in-memory state store"
Oct  2 19:44:00.620140 kubelet[2091]: I1002 19:44:00.620115    2091 kubelet.go:381] "Attempting to sync node with API server"
Oct  2 19:44:00.620140 kubelet[2091]: I1002 19:44:00.620145    2091 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct  2 19:44:00.620317 kubelet[2091]: I1002 19:44:00.620168    2091 kubelet.go:281] "Adding apiserver pod source"
Oct  2 19:44:00.620317 kubelet[2091]: I1002 19:44:00.620186    2091 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct  2 19:44:00.621869 kubelet[2091]: E1002 19:44:00.621846    2091 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:00.623199 kubelet[2091]: E1002 19:44:00.623177    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:00.623324 kubelet[2091]: I1002 19:44:00.623185    2091 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Oct  2 19:44:00.623919 kubelet[2091]: W1002 19:44:00.623902    2091 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct  2 19:44:00.626097 kubelet[2091]: I1002 19:44:00.626074    2091 server.go:1175] "Started kubelet"
Oct  2 19:44:00.629276 kubelet[2091]: E1002 19:44:00.629255    2091 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Oct  2 19:44:00.629593 kubelet[2091]: E1002 19:44:00.629579    2091 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct  2 19:44:00.629000 audit[2091]: AVC avc:  denied  { mac_admin } for  pid=2091 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:00.629000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Oct  2 19:44:00.629000 audit[2091]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bd29f0 a1=c000bba8b8 a2=c000bd29c0 a3=25 items=0 ppid=1 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.629000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Oct  2 19:44:00.631218 kubelet[2091]: I1002 19:44:00.631200    2091 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Oct  2 19:44:00.630000 audit[2091]: AVC avc:  denied  { mac_admin } for  pid=2091 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:00.630000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Oct  2 19:44:00.630000 audit[2091]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000df8960 a1=c000bba8d0 a2=c000bd2a80 a3=25 items=0 ppid=1 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.630000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Oct  2 19:44:00.632152 kubelet[2091]: I1002 19:44:00.632081    2091 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Oct  2 19:44:00.632412 kubelet[2091]: I1002 19:44:00.632398    2091 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct  2 19:44:00.632726 kubelet[2091]: I1002 19:44:00.632711    2091 server.go:155] "Starting to listen" address="0.0.0.0" port=10250
Oct  2 19:44:00.633428 kubelet[2091]: I1002 19:44:00.633401    2091 server.go:438] "Adding debug handlers to kubelet server"
Oct  2 19:44:00.640122 kubelet[2091]: I1002 19:44:00.640079    2091 volume_manager.go:293] "Starting Kubelet Volume Manager"
Oct  2 19:44:00.640243 kubelet[2091]: I1002 19:44:00.640182    2091 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct  2 19:44:00.641904 kubelet[2091]: E1002 19:44:00.641882    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:00.699684 kubelet[2091]: E1002 19:44:00.699649    2091 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.22.191" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Oct  2 19:44:00.703203 kubelet[2091]: W1002 19:44:00.703163    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct  2 19:44:00.703350 kubelet[2091]: E1002 19:44:00.703214    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct  2 19:44:00.703350 kubelet[2091]: W1002 19:44:00.703268    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct  2 19:44:00.703350 kubelet[2091]: E1002 19:44:00.703280    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct  2 19:44:00.703350 kubelet[2091]: W1002 19:44:00.703312    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.22.191" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct  2 19:44:00.703350 kubelet[2091]: E1002 19:44:00.703323    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.191" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct  2 19:44:00.703839 kubelet[2091]: E1002 19:44:00.703379    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3c98fa887", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 625518727, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 625518727, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.716241 kubelet[2091]: E1002 19:44:00.712919    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3c9cd612f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 629563695, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 629563695, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.724000 audit[2106]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.724000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff9a5bf360 a2=0 a3=7fff9a5bf34c items=0 ppid=2091 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Oct  2 19:44:00.727077 kubelet[2091]: I1002 19:44:00.727058    2091 cpu_manager.go:213] "Starting CPU manager" policy="none"
Oct  2 19:44:00.727198 kubelet[2091]: I1002 19:44:00.727190    2091 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
Oct  2 19:44:00.727249 kubelet[2091]: I1002 19:44:00.727244    2091 state_mem.go:36] "Initialized new in-memory state store"
Oct  2 19:44:00.727000 audit[2110]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.727000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc25fc0cb0 a2=0 a3=7ffc25fc0c9c items=0 ppid=2091 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Oct  2 19:44:00.730081 kubelet[2091]: I1002 19:44:00.730055    2091 policy_none.go:49] "None policy: Start"
Oct  2 19:44:00.730755 kubelet[2091]: E1002 19:44:00.730657    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e2347", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.191 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726082375, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726082375, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.731684 kubelet[2091]: I1002 19:44:00.731670    2091 memory_manager.go:168] "Starting memorymanager" policy="None"
Oct  2 19:44:00.731768 kubelet[2091]: I1002 19:44:00.731761    2091 state_mem.go:35] "Initializing new in-memory state store"
Oct  2 19:44:00.734745 kubelet[2091]: E1002 19:44:00.734662    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e3811", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.191 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726087697, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726087697, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.740875 systemd[1]: Created slice kubepods.slice.
Oct  2 19:44:00.743932 kubelet[2091]: E1002 19:44:00.743904    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:00.746425 kubelet[2091]: I1002 19:44:00.746401    2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.191"
Oct  2 19:44:00.747222 kubelet[2091]: E1002 19:44:00.747119    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e424a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.191 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726090314, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726090314, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.748169 kubelet[2091]: E1002 19:44:00.748129    2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.191"
Oct  2 19:44:00.748695 kubelet[2091]: E1002 19:44:00.748621    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e2347", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.191 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726082375, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 746354532, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e2347" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.732000 audit[2112]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.732000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffbce6bbe0 a2=0 a3=7fffbce6bbcc items=0 ppid=2091 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Oct  2 19:44:00.752784 kubelet[2091]: E1002 19:44:00.751715    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e3811", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.191 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726087697, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 746361457, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e3811" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.754872 kubelet[2091]: E1002 19:44:00.753945    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e424a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.191 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726090314, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 746367822, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e424a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.756226 systemd[1]: Created slice kubepods-burstable.slice.
Oct  2 19:44:00.757000 audit[2118]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2118 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.757000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd4a415120 a2=0 a3=7ffd4a41510c items=0 ppid=2091 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Oct  2 19:44:00.761333 systemd[1]: Created slice kubepods-besteffort.slice.
Oct  2 19:44:00.772016 kubelet[2091]: I1002 19:44:00.771988    2091 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct  2 19:44:00.772242 kubelet[2091]: I1002 19:44:00.772229    2091 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Oct  2 19:44:00.772719 kubelet[2091]: I1002 19:44:00.772702    2091 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct  2 19:44:00.771000 audit[2091]: AVC avc:  denied  { mac_admin } for  pid=2091 comm="kubelet" capability=33  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:00.771000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Oct  2 19:44:00.771000 audit[2091]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f52de0 a1=c000d6f4a0 a2=c000f52db0 a3=25 items=0 ppid=1 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.771000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Oct  2 19:44:00.775371 kubelet[2091]: E1002 19:44:00.775335    2091 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.191\" not found"
Oct  2 19:44:00.779664 kubelet[2091]: E1002 19:44:00.779537    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3d291ac94", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 776645780, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 776645780, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.820000 audit[2123]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.820000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc415910f0 a2=0 a3=7ffc415910dc items=0 ppid=2091 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Oct  2 19:44:00.822000 audit[2124]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.822000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc39fe7ae0 a2=0 a3=7ffc39fe7acc items=0 ppid=2091 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174
Oct  2 19:44:00.831000 audit[2127]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.831000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffee3a72f90 a2=0 a3=7ffee3a72f7c items=0 ppid=2091 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030
Oct  2 19:44:00.836000 audit[2130]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.836000 audit[2130]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffcd3169190 a2=0 a3=7ffcd316917c items=0 ppid=2091 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B
Oct  2 19:44:00.838000 audit[2131]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.838000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff8a26f780 a2=0 a3=7fff8a26f76c items=0 ppid=2091 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174
Oct  2 19:44:00.841000 audit[2132]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.841000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedb291240 a2=0 a3=7ffedb29122c items=0 ppid=2091 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.841000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Oct  2 19:44:00.844136 kubelet[2091]: E1002 19:44:00.844094    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:00.845000 audit[2134]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.845000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdd293f9d0 a2=0 a3=7ffdd293f9bc items=0 ppid=2091 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030
Oct  2 19:44:00.848000 audit[2136]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.848000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcc85bcde0 a2=0 a3=7ffcc85bcdcc items=0 ppid=2091 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Oct  2 19:44:00.878000 audit[2139]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.878000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fff4c5c9b80 a2=0 a3=7fff4c5c9b6c items=0 ppid=2091 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E
Oct  2 19:44:00.881000 audit[2141]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.881000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffedc9aec00 a2=0 a3=7ffedc9aebec items=0 ppid=2091 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030
Oct  2 19:44:00.896000 audit[2144]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.896000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7fff0f1d8250 a2=0 a3=7fff0f1d823c items=0 ppid=2091 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445
Oct  2 19:44:00.898866 kubelet[2091]: I1002 19:44:00.897998    2091 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Oct  2 19:44:00.899000 audit[2146]: NETFILTER_CFG table=mangle:17 family=2 entries=1 op=nft_register_chain pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.899000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe75451f10 a2=0 a3=7ffe75451efc items=0 ppid=2091 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Oct  2 19:44:00.900000 audit[2145]: NETFILTER_CFG table=mangle:18 family=10 entries=2 op=nft_register_chain pid=2145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.900000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffc7f317d0 a2=0 a3=7fffc7f317bc items=0 ppid=2091 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Oct  2 19:44:00.901636 kubelet[2091]: E1002 19:44:00.901615    2091 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.22.191" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Oct  2 19:44:00.902000 audit[2147]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.902000 audit[2147]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff32c802f0 a2=0 a3=7fff32c802dc items=0 ppid=2091 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Oct  2 19:44:00.902000 audit[2148]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=2148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.902000 audit[2148]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc5d499f70 a2=0 a3=7ffc5d499f5c items=0 ppid=2091 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174
Oct  2 19:44:00.904000 audit[2149]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:00.904000 audit[2149]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7a4e5ed0 a2=0 a3=7ffd7a4e5ebc items=0 ppid=2091 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Oct  2 19:44:00.906000 audit[2151]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.906000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc53368c70 a2=0 a3=7ffc53368c5c items=0 ppid=2091 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.906000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030
Oct  2 19:44:00.907000 audit[2152]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.907000 audit[2152]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc75076fa0 a2=0 a3=7ffc75076f8c items=0 ppid=2091 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Oct  2 19:44:00.911000 audit[2154]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.911000 audit[2154]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffde65ae6b0 a2=0 a3=7ffde65ae69c items=0 ppid=2091 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B
Oct  2 19:44:00.912000 audit[2155]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.912000 audit[2155]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce8ab6e70 a2=0 a3=7ffce8ab6e5c items=0 ppid=2091 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174
Oct  2 19:44:00.914000 audit[2156]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.914000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc5fef080 a2=0 a3=7ffcc5fef06c items=0 ppid=2091 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Oct  2 19:44:00.918000 audit[2158]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.918000 audit[2158]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd499bf670 a2=0 a3=7ffd499bf65c items=0 ppid=2091 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.918000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030
Oct  2 19:44:00.920000 audit[2160]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.920000 audit[2160]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc9610c0c0 a2=0 a3=7ffc9610c0ac items=0 ppid=2091 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Oct  2 19:44:00.924000 audit[2162]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.924000 audit[2162]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc4b137e10 a2=0 a3=7ffc4b137dfc items=0 ppid=2091 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E
Oct  2 19:44:00.928000 audit[2164]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.928000 audit[2164]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc0e4b26d0 a2=0 a3=7ffc0e4b26bc items=0 ppid=2091 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030
Oct  2 19:44:00.934000 audit[2166]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.934000 audit[2166]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffe9fc64450 a2=0 a3=7ffe9fc6443c items=0 ppid=2091 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445
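The audit PROCTITLE fields in the entries above carry the invoked command line as a hex string with NUL bytes between arguments. As a minimal sketch (not part of the captured log), the following Python decodes one of them back into the ip6tables invocation it records; the hex value is copied verbatim from the 19:44:00.912000 entry:

    # decode_proctitle.py - turn an audit PROCTITLE hex string into readable argv
    def decode_proctitle(hex_string: str) -> str:
        # auditd hex-encodes the process title; argv entries are NUL-separated
        return bytes.fromhex(hex_string).replace(b"\x00", b" ").decode()

    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D4D41524B2D4D415351002D74006E6174"
    ))
    # -> ip6tables -w 5 -W 100000 -N KUBE-MARK-MASQ -t nat

Applied to the rest of this burst (family=10, i.e. IPv6), the same helper shows the kubelet creating the KUBE-MARK-MASQ, KUBE-POSTROUTING and KUBE-FIREWALL chains and their marking/MASQUERADE rules, which matches the "Initialized iptables rules." protocol=IPv6 line that follows.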
Oct  2 19:44:00.936564 kubelet[2091]: I1002 19:44:00.936383    2091 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Oct  2 19:44:00.936724 kubelet[2091]: I1002 19:44:00.936641    2091 status_manager.go:161] "Starting to sync pod status with apiserver"
Oct  2 19:44:00.936724 kubelet[2091]: I1002 19:44:00.936669    2091 kubelet.go:2010] "Starting kubelet main sync loop"
Oct  2 19:44:00.936818 kubelet[2091]: E1002 19:44:00.936726    2091 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Oct  2 19:44:00.942106 kubelet[2091]: W1002 19:44:00.941340    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct  2 19:44:00.942450 kubelet[2091]: E1002 19:44:00.942389    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct  2 19:44:00.942000 audit[2167]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.942000 audit[2167]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee413d3d0 a2=0 a3=7ffee413d3bc items=0 ppid=2091 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Oct  2 19:44:00.944366 kubelet[2091]: E1002 19:44:00.944306    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:00.946000 audit[2168]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.946000 audit[2168]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3f5ec920 a2=0 a3=7ffc3f5ec90c items=0 ppid=2091 pid=2168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Oct  2 19:44:00.955000 audit[2169]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:00.955000 audit[2169]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec67e1e10 a2=0 a3=7ffec67e1dfc items=0 ppid=2091 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:00.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Oct  2 19:44:00.957388 kubelet[2091]: I1002 19:44:00.957357    2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.191"
Oct  2 19:44:00.959327 kubelet[2091]: E1002 19:44:00.959300    2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.191"
Oct  2 19:44:00.959525 kubelet[2091]: E1002 19:44:00.959305    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e2347", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.191 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726082375, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 957310492, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e2347" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:00.961035 kubelet[2091]: E1002 19:44:00.960911    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e3811", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.191 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726087697, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 957319070, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e3811" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:01.030683 kubelet[2091]: E1002 19:44:01.030592    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e424a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.191 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726090314, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 957323470, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e424a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:01.045452 kubelet[2091]: E1002 19:44:01.045407    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.146432 kubelet[2091]: E1002 19:44:01.146322    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.247071 kubelet[2091]: E1002 19:44:01.247025    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.304321 kubelet[2091]: E1002 19:44:01.304278    2091 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.22.191" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Oct  2 19:44:01.348215 kubelet[2091]: E1002 19:44:01.348167    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.361229 kubelet[2091]: I1002 19:44:01.361198    2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.191"
Oct  2 19:44:01.362611 kubelet[2091]: E1002 19:44:01.362577    2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.191"
Oct  2 19:44:01.362850 kubelet[2091]: E1002 19:44:01.362762    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e2347", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.191 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726082375, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 1, 361112939, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e2347" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:01.429735 kubelet[2091]: E1002 19:44:01.429635    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e3811", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.191 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726087697, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 1, 361128264, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e3811" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:01.449292 kubelet[2091]: E1002 19:44:01.449240    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.549816 kubelet[2091]: E1002 19:44:01.549771    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.623622 kubelet[2091]: E1002 19:44:01.623567    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:01.629532 kubelet[2091]: E1002 19:44:01.629408    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e424a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.191 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726090314, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 1, 361132515, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e424a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:01.650223 kubelet[2091]: E1002 19:44:01.650171    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.667998 kubelet[2091]: W1002 19:44:01.667759    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct  2 19:44:01.667998 kubelet[2091]: E1002 19:44:01.668003    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct  2 19:44:01.751459 kubelet[2091]: E1002 19:44:01.751275    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.852130 kubelet[2091]: E1002 19:44:01.851932    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:01.928027 kubelet[2091]: W1002 19:44:01.927916    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct  2 19:44:01.928027 kubelet[2091]: E1002 19:44:01.928027    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct  2 19:44:01.952985 kubelet[2091]: E1002 19:44:01.952814    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.053840 kubelet[2091]: E1002 19:44:02.053381    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.106036 kubelet[2091]: E1002 19:44:02.105991    2091 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.22.191" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Oct  2 19:44:02.155015 kubelet[2091]: E1002 19:44:02.154645    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.163853 kubelet[2091]: I1002 19:44:02.163823    2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.191"
Oct  2 19:44:02.165436 kubelet[2091]: E1002 19:44:02.165264    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e2347", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.191 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726082375, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 2, 163771877, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e2347" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:02.165694 kubelet[2091]: E1002 19:44:02.165399    2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.191"
Oct  2 19:44:02.166909 kubelet[2091]: E1002 19:44:02.166824    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e3811", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.191 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726087697, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 2, 163785380, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e3811" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:02.225578 kubelet[2091]: W1002 19:44:02.225544    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.22.191" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct  2 19:44:02.225578 kubelet[2091]: E1002 19:44:02.225582    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.191" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct  2 19:44:02.231078 kubelet[2091]: E1002 19:44:02.230838    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e424a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.191 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726090314, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 2, 163789549, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e424a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:02.255263 kubelet[2091]: E1002 19:44:02.255218    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.355746 kubelet[2091]: E1002 19:44:02.355608    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.456358 kubelet[2091]: E1002 19:44:02.456110    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.474328 kubelet[2091]: W1002 19:44:02.474298    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct  2 19:44:02.474328 kubelet[2091]: E1002 19:44:02.474338    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct  2 19:44:02.557663 kubelet[2091]: E1002 19:44:02.557578    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.624400 kubelet[2091]: E1002 19:44:02.624192    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:02.658807 kubelet[2091]: E1002 19:44:02.658751    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.759753 kubelet[2091]: E1002 19:44:02.759712    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.860711 kubelet[2091]: E1002 19:44:02.860660    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:02.961924 kubelet[2091]: E1002 19:44:02.961654    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.062587 kubelet[2091]: E1002 19:44:03.062465    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.163162 kubelet[2091]: E1002 19:44:03.163111    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.263953 kubelet[2091]: E1002 19:44:03.263838    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.364923 kubelet[2091]: E1002 19:44:03.364876    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.465774 kubelet[2091]: E1002 19:44:03.465727    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.566567 kubelet[2091]: E1002 19:44:03.566437    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.625100 kubelet[2091]: E1002 19:44:03.625050    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:03.645126 kubelet[2091]: W1002 19:44:03.645086    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct  2 19:44:03.645126 kubelet[2091]: E1002 19:44:03.645127    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct  2 19:44:03.667435 kubelet[2091]: E1002 19:44:03.667386    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.707398 kubelet[2091]: E1002 19:44:03.707351    2091 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.22.191" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Oct  2 19:44:03.766604 kubelet[2091]: I1002 19:44:03.766567    2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.191"
Oct  2 19:44:03.767618 kubelet[2091]: E1002 19:44:03.767589    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.768063 kubelet[2091]: E1002 19:44:03.768042    2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.191"
Oct  2 19:44:03.768634 kubelet[2091]: E1002 19:44:03.768559    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e2347", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.191 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726082375, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 3, 766518977, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e2347" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:03.769804 kubelet[2091]: E1002 19:44:03.769730    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e3811", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.191 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726087697, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 3, 766531174, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e3811" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:03.771292 kubelet[2091]: E1002 19:44:03.771219    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e424a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.191 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726090314, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 3, 766535472, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e424a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:03.867936 kubelet[2091]: E1002 19:44:03.867817    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:03.968953 kubelet[2091]: E1002 19:44:03.968904    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.069599 kubelet[2091]: E1002 19:44:04.069550    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.170552 kubelet[2091]: E1002 19:44:04.170460    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.271055 kubelet[2091]: E1002 19:44:04.271010    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.371603 kubelet[2091]: E1002 19:44:04.371560    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.472283 kubelet[2091]: E1002 19:44:04.472163    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.573046 kubelet[2091]: E1002 19:44:04.572997    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.625609 kubelet[2091]: E1002 19:44:04.625557    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:04.674127 kubelet[2091]: E1002 19:44:04.674083    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.729326 kubelet[2091]: W1002 19:44:04.729212    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.22.191" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct  2 19:44:04.729326 kubelet[2091]: E1002 19:44:04.729250    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.191" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct  2 19:44:04.774651 kubelet[2091]: E1002 19:44:04.774604    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.843339 kubelet[2091]: W1002 19:44:04.843296    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct  2 19:44:04.843473 kubelet[2091]: E1002 19:44:04.843352    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct  2 19:44:04.875729 kubelet[2091]: E1002 19:44:04.875677    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:04.976772 kubelet[2091]: E1002 19:44:04.976728    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.059564 kubelet[2091]: W1002 19:44:05.059446    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct  2 19:44:05.059564 kubelet[2091]: E1002 19:44:05.059494    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct  2 19:44:05.077906 kubelet[2091]: E1002 19:44:05.077856    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.178720 kubelet[2091]: E1002 19:44:05.178670    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.279435 kubelet[2091]: E1002 19:44:05.279390    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.380161 kubelet[2091]: E1002 19:44:05.380038    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.480784 kubelet[2091]: E1002 19:44:05.480738    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.581548 kubelet[2091]: E1002 19:44:05.581503    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.626096 kubelet[2091]: E1002 19:44:05.626040    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:05.682028 kubelet[2091]: E1002 19:44:05.681979    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.773732 kubelet[2091]: E1002 19:44:05.773701    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:05.782399 kubelet[2091]: E1002 19:44:05.782356    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.883073 kubelet[2091]: E1002 19:44:05.883025    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:05.984008 kubelet[2091]: E1002 19:44:05.983838    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.084630 kubelet[2091]: E1002 19:44:06.084433    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.185235 kubelet[2091]: E1002 19:44:06.184977    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.285558 kubelet[2091]: E1002 19:44:06.285429    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.386117 kubelet[2091]: E1002 19:44:06.386063    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.486649 kubelet[2091]: E1002 19:44:06.486605    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.587711 kubelet[2091]: E1002 19:44:06.587589    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.627176 kubelet[2091]: E1002 19:44:06.627072    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:06.688666 kubelet[2091]: E1002 19:44:06.688624    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.789238 kubelet[2091]: E1002 19:44:06.789193    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.889911 kubelet[2091]: E1002 19:44:06.889794    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:06.909362 kubelet[2091]: E1002 19:44:06.909325    2091 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.22.191" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Oct  2 19:44:06.974525 kubelet[2091]: I1002 19:44:06.971035    2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.191"
Oct  2 19:44:06.974525 kubelet[2091]: E1002 19:44:06.972301    2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.191"
Oct  2 19:44:06.974525 kubelet[2091]: E1002 19:44:06.972428    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e2347", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.191 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726082375, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 6, 970982761, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e2347" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:06.978139 kubelet[2091]: E1002 19:44:06.978018    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e3811", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.191 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726087697, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 6, 970990356, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e3811" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:06.980373 kubelet[2091]: E1002 19:44:06.980263    2091 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.191.178a61e3cf8e424a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.191", UID:"172.31.22.191", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.191 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.191"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 44, 0, 726090314, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 44, 6, 970995891, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.22.191.178a61e3cf8e424a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Oct  2 19:44:06.989989 kubelet[2091]: E1002 19:44:06.989941    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.090618 kubelet[2091]: E1002 19:44:07.090569    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.190698 kubelet[2091]: E1002 19:44:07.190660    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.291682 kubelet[2091]: E1002 19:44:07.291540    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.392473 kubelet[2091]: E1002 19:44:07.392206    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.493103 kubelet[2091]: E1002 19:44:07.492983    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.593665 kubelet[2091]: E1002 19:44:07.593620    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.628200 kubelet[2091]: E1002 19:44:07.628149    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:07.670339 kubelet[2091]: W1002 19:44:07.670074    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct  2 19:44:07.670696 kubelet[2091]: E1002 19:44:07.670353    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct  2 19:44:07.694236 kubelet[2091]: E1002 19:44:07.694195    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.795313 kubelet[2091]: E1002 19:44:07.795209    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.895857 kubelet[2091]: E1002 19:44:07.895807    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:07.996001 kubelet[2091]: E1002 19:44:07.995956    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.096658 kubelet[2091]: E1002 19:44:08.096543    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.164628 kubelet[2091]: W1002 19:44:08.164590    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct  2 19:44:08.164628 kubelet[2091]: E1002 19:44:08.164628    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct  2 19:44:08.197582 kubelet[2091]: E1002 19:44:08.197439    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.298198 kubelet[2091]: E1002 19:44:08.298150    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.399165 kubelet[2091]: E1002 19:44:08.399132    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.499908 kubelet[2091]: E1002 19:44:08.499855    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.600474 kubelet[2091]: E1002 19:44:08.600426    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.629247 kubelet[2091]: E1002 19:44:08.629198    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:08.701420 kubelet[2091]: E1002 19:44:08.700936    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.801242 kubelet[2091]: E1002 19:44:08.801195    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:08.901892 kubelet[2091]: E1002 19:44:08.901846    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.002285 kubelet[2091]: E1002 19:44:09.002103    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.102821 kubelet[2091]: E1002 19:44:09.102729    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.203704 kubelet[2091]: E1002 19:44:09.203648    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.304599 kubelet[2091]: E1002 19:44:09.304382    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.405297 kubelet[2091]: E1002 19:44:09.405247    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.506123 kubelet[2091]: E1002 19:44:09.505988    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.606598 kubelet[2091]: E1002 19:44:09.606466    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.630048 kubelet[2091]: E1002 19:44:09.630007    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:09.706827 kubelet[2091]: E1002 19:44:09.706784    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.807666 kubelet[2091]: E1002 19:44:09.807621    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:09.875093 kubelet[2091]: W1002 19:44:09.874986    2091 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct  2 19:44:09.875093 kubelet[2091]: E1002 19:44:09.875025    2091 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct  2 19:44:09.908348 kubelet[2091]: E1002 19:44:09.908298    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.009365 kubelet[2091]: E1002 19:44:10.009322    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.110386 kubelet[2091]: E1002 19:44:10.110346    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.211056 kubelet[2091]: E1002 19:44:10.211013    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.311781 kubelet[2091]: E1002 19:44:10.311738    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.412376 kubelet[2091]: E1002 19:44:10.412333    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.513377 kubelet[2091]: E1002 19:44:10.513273    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.603886 kubelet[2091]: I1002 19:44:10.603832    2091 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Oct  2 19:44:10.614178 kubelet[2091]: E1002 19:44:10.614135    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.630471 kubelet[2091]: E1002 19:44:10.630424    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:10.715010 kubelet[2091]: E1002 19:44:10.714967    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.775087 kubelet[2091]: E1002 19:44:10.774979    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:10.776189 kubelet[2091]: E1002 19:44:10.776157    2091 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.191\" not found"
Oct  2 19:44:10.815784 kubelet[2091]: E1002 19:44:10.815745    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:10.916397 kubelet[2091]: E1002 19:44:10.916347    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.016755 kubelet[2091]: E1002 19:44:11.016715    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.058968 kubelet[2091]: E1002 19:44:11.058826    2091 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.22.191" not found
Oct  2 19:44:11.117817 kubelet[2091]: E1002 19:44:11.117772    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.218623 kubelet[2091]: E1002 19:44:11.218574    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.319221 kubelet[2091]: E1002 19:44:11.319114    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.419821 kubelet[2091]: E1002 19:44:11.419776    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.520151 kubelet[2091]: E1002 19:44:11.520114    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.621326 kubelet[2091]: E1002 19:44:11.621215    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.629438 kubelet[2091]: I1002 19:44:11.629394    2091 apiserver.go:52] "Watching apiserver"
Oct  2 19:44:11.631625 kubelet[2091]: E1002 19:44:11.631588    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:11.721946 kubelet[2091]: E1002 19:44:11.721900    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.822721 kubelet[2091]: E1002 19:44:11.822671    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:11.923226 kubelet[2091]: E1002 19:44:11.923179    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.019315 kubelet[2091]: I1002 19:44:12.019266    2091 reconciler.go:169] "Reconciler: start to sync state"
Oct  2 19:44:12.023611 kubelet[2091]: E1002 19:44:12.023474    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.123642 kubelet[2091]: E1002 19:44:12.123598    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.224407 kubelet[2091]: E1002 19:44:12.224289    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.325120 kubelet[2091]: E1002 19:44:12.325071    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.426181 kubelet[2091]: E1002 19:44:12.426131    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.507912 kubelet[2091]: E1002 19:44:12.507810    2091 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.22.191" not found
Oct  2 19:44:12.527174 kubelet[2091]: E1002 19:44:12.527130    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.628212 kubelet[2091]: E1002 19:44:12.628166    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.632424 kubelet[2091]: E1002 19:44:12.632373    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:12.729171 kubelet[2091]: E1002 19:44:12.729125    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.830269 kubelet[2091]: E1002 19:44:12.830160    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:12.930764 kubelet[2091]: E1002 19:44:12.930718    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:13.032214 kubelet[2091]: E1002 19:44:13.032163    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:13.132856 kubelet[2091]: E1002 19:44:13.132664    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:13.233499 kubelet[2091]: E1002 19:44:13.233452    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:13.318400 kubelet[2091]: E1002 19:44:13.318355    2091 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.22.191\" not found" node="172.31.22.191"
Oct  2 19:44:13.333912 kubelet[2091]: E1002 19:44:13.333871    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:13.373535 kubelet[2091]: I1002 19:44:13.373511    2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.191"
Oct  2 19:44:13.435007 kubelet[2091]: E1002 19:44:13.434966    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:13.535759 kubelet[2091]: E1002 19:44:13.535715    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:13.632590 kubelet[2091]: E1002 19:44:13.632550    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:13.636888 kubelet[2091]: E1002 19:44:13.636841    2091 kubelet.go:2448] "Error getting node" err="node \"172.31.22.191\" not found"
Oct  2 19:44:13.708325 kubelet[2091]: I1002 19:44:13.708136    2091 kubelet_node_status.go:73] "Successfully registered node" node="172.31.22.191"
Oct  2 19:44:13.737713 kubelet[2091]: I1002 19:44:13.737673    2091 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Oct  2 19:44:13.738383 env[1632]: time="2023-10-02T19:44:13.738340275Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct  2 19:44:13.741433 kubelet[2091]: I1002 19:44:13.741264    2091 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Oct  2 19:44:13.745886 kubelet[2091]: E1002 19:44:13.745856    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:13.756179 kubelet[2091]: I1002 19:44:13.756153    2091 topology_manager.go:205] "Topology Admit Handler"
Oct  2 19:44:13.762356 systemd[1]: Created slice kubepods-besteffort-pod6d6e9996_1c84_44cd_974e_fe7af03700b0.slice.
Oct  2 19:44:13.769951 kubelet[2091]: I1002 19:44:13.769921    2091 topology_manager.go:205] "Topology Admit Handler"
Oct  2 19:44:13.776517 systemd[1]: Created slice kubepods-burstable-pod638fe839_a332_4054_92a8_71460027ef59.slice.
Oct  2 19:44:13.933093 kubelet[2091]: I1002 19:44:13.933054    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d6e9996-1c84-44cd-974e-fe7af03700b0-kube-proxy\") pod \"kube-proxy-7nr29\" (UID: \"6d6e9996-1c84-44cd-974e-fe7af03700b0\") " pod="kube-system/kube-proxy-7nr29"
Oct  2 19:44:13.933284 kubelet[2091]: I1002 19:44:13.933112    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d6e9996-1c84-44cd-974e-fe7af03700b0-lib-modules\") pod \"kube-proxy-7nr29\" (UID: \"6d6e9996-1c84-44cd-974e-fe7af03700b0\") " pod="kube-system/kube-proxy-7nr29"
Oct  2 19:44:13.933284 kubelet[2091]: I1002 19:44:13.933146    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-bpf-maps\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933284 kubelet[2091]: I1002 19:44:13.933172    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-hostproc\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933284 kubelet[2091]: I1002 19:44:13.933205    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-xtables-lock\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933284 kubelet[2091]: I1002 19:44:13.933233    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-host-proc-sys-kernel\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933284 kubelet[2091]: I1002 19:44:13.933258    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-etc-cni-netd\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933594 kubelet[2091]: I1002 19:44:13.933285    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/638fe839-a332-4054-92a8-71460027ef59-cilium-config-path\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933594 kubelet[2091]: I1002 19:44:13.933318    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d6e9996-1c84-44cd-974e-fe7af03700b0-xtables-lock\") pod \"kube-proxy-7nr29\" (UID: \"6d6e9996-1c84-44cd-974e-fe7af03700b0\") " pod="kube-system/kube-proxy-7nr29"
Oct  2 19:44:13.933594 kubelet[2091]: I1002 19:44:13.933351    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cilium-run\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933594 kubelet[2091]: I1002 19:44:13.933382    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cilium-cgroup\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933594 kubelet[2091]: I1002 19:44:13.933414    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/638fe839-a332-4054-92a8-71460027ef59-clustermesh-secrets\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933594 kubelet[2091]: I1002 19:44:13.933445    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-host-proc-sys-net\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933847 kubelet[2091]: I1002 19:44:13.933504    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/638fe839-a332-4054-92a8-71460027ef59-hubble-tls\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933847 kubelet[2091]: I1002 19:44:13.933541    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnj9q\" (UniqueName: \"kubernetes.io/projected/638fe839-a332-4054-92a8-71460027ef59-kube-api-access-dnj9q\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933847 kubelet[2091]: I1002 19:44:13.933575    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqc9\" (UniqueName: \"kubernetes.io/projected/6d6e9996-1c84-44cd-974e-fe7af03700b0-kube-api-access-fgqc9\") pod \"kube-proxy-7nr29\" (UID: \"6d6e9996-1c84-44cd-974e-fe7af03700b0\") " pod="kube-system/kube-proxy-7nr29"
Oct  2 19:44:13.933847 kubelet[2091]: I1002 19:44:13.933605    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cni-path\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:13.933847 kubelet[2091]: I1002 19:44:13.933658    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-lib-modules\") pod \"cilium-xpqsm\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") " pod="kube-system/cilium-xpqsm"
Oct  2 19:44:14.025000 audit[1891]: USER_END pid=1891 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:44:14.026412 sudo[1891]: pam_unix(sudo:session): session closed for user root
Oct  2 19:44:14.028163 kernel: kauditd_printk_skb: 540 callbacks suppressed
Oct  2 19:44:14.028216 kernel: audit: type=1106 audit(1696275854.025:640): pid=1891 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:44:14.028000 audit[1891]: CRED_DISP pid=1891 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:44:14.044544 kernel: audit: type=1104 audit(1696275854.028:641): pid=1891 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct  2 19:44:14.060591 sshd[1888]: pam_unix(sshd:session): session closed for user core
Oct  2 19:44:14.062000 audit[1888]: USER_END pid=1888 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:44:14.065013 systemd[1]: sshd@6-172.31.22.191:22-139.178.89.65:55632.service: Deactivated successfully.
Oct  2 19:44:14.066068 systemd[1]: session-7.scope: Deactivated successfully.
Oct  2 19:44:14.068321 systemd-logind[1622]: Session 7 logged out. Waiting for processes to exit.
Oct  2 19:44:14.070215 systemd-logind[1622]: Removed session 7.
Oct  2 19:44:14.062000 audit[1888]: CRED_DISP pid=1888 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:44:14.078181 kernel: audit: type=1106 audit(1696275854.062:642): pid=1888 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:44:14.078322 kernel: audit: type=1104 audit(1696275854.062:643): pid=1888 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct  2 19:44:14.078363 kernel: audit: type=1131 audit(1696275854.062:644): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.191:22-139.178.89.65:55632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:44:14.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.191:22-139.178.89.65:55632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:44:14.633854 kubelet[2091]: E1002 19:44:14.633805    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:14.903937 kubelet[2091]: I1002 19:44:14.903900    2091 request.go:690] Waited for 1.133555267s due to client-side throttling, not priority and fairness, request: GET:https://172.31.19.18:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dcilium-clustermesh&limit=500&resourceVersion=0
Oct  2 19:44:15.036259 kubelet[2091]: E1002 19:44:15.036212    2091 configmap.go:197] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Oct  2 19:44:15.036440 kubelet[2091]: E1002 19:44:15.036327    2091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/638fe839-a332-4054-92a8-71460027ef59-cilium-config-path podName:638fe839-a332-4054-92a8-71460027ef59 nodeName:}" failed. No retries permitted until 2023-10-02 19:44:15.536302774 +0000 UTC m=+16.030392619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/638fe839-a332-4054-92a8-71460027ef59-cilium-config-path") pod "cilium-xpqsm" (UID: "638fe839-a332-4054-92a8-71460027ef59") : failed to sync configmap cache: timed out waiting for the condition
Oct  2 19:44:15.583869 env[1632]: time="2023-10-02T19:44:15.583735222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpqsm,Uid:638fe839-a332-4054-92a8-71460027ef59,Namespace:kube-system,Attempt:0,}"
Oct  2 19:44:15.634823 kubelet[2091]: E1002 19:44:15.634773    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:15.776620 kubelet[2091]: E1002 19:44:15.776587    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:15.871723 env[1632]: time="2023-10-02T19:44:15.871246535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7nr29,Uid:6d6e9996-1c84-44cd-974e-fe7af03700b0,Namespace:kube-system,Attempt:0,}"
Oct  2 19:44:16.170251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820517492.mount: Deactivated successfully.
Oct  2 19:44:16.189356 env[1632]: time="2023-10-02T19:44:16.189305179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:16.191107 env[1632]: time="2023-10-02T19:44:16.191064862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:16.195962 env[1632]: time="2023-10-02T19:44:16.195908832Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:16.200517 env[1632]: time="2023-10-02T19:44:16.200377909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:16.203139 env[1632]: time="2023-10-02T19:44:16.203096374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:16.210667 env[1632]: time="2023-10-02T19:44:16.210608966Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:16.217728 env[1632]: time="2023-10-02T19:44:16.217634189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:16.231180 env[1632]: time="2023-10-02T19:44:16.231123777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:16.278853 env[1632]: time="2023-10-02T19:44:16.274679335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct  2 19:44:16.278853 env[1632]: time="2023-10-02T19:44:16.274727767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct  2 19:44:16.278853 env[1632]: time="2023-10-02T19:44:16.274744987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct  2 19:44:16.278853 env[1632]: time="2023-10-02T19:44:16.274896453Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe pid=2182 runtime=io.containerd.runc.v2
Oct  2 19:44:16.294984 env[1632]: time="2023-10-02T19:44:16.294720217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct  2 19:44:16.294984 env[1632]: time="2023-10-02T19:44:16.294884717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct  2 19:44:16.294984 env[1632]: time="2023-10-02T19:44:16.294901980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct  2 19:44:16.299276 env[1632]: time="2023-10-02T19:44:16.295422688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df867b8422093561117db526914d050ef725266518910bf1e2eb918e10683e0f pid=2203 runtime=io.containerd.runc.v2
Oct  2 19:44:16.323618 systemd[1]: Started cri-containerd-1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe.scope.
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.374431 kernel: audit: type=1400 audit(1696275856.363:645): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.374588 kernel: audit: type=1400 audit(1696275856.363:646): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.385715 systemd[1]: Started cri-containerd-df867b8422093561117db526914d050ef725266518910bf1e2eb918e10683e0f.scope.
Oct  2 19:44:16.387083 kernel: audit: type=1400 audit(1696275856.363:647): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.387182 kernel: audit: type=1400 audit(1696275856.363:648): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.392719 kernel: audit: type=1400 audit(1696275856.363:649): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.363000 audit: BPF prog-id=76 op=LOAD
Oct  2 19:44:16.375000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.375000 audit[2204]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2182 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373565653962313830303433323061306334303535323762356465
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=2182 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373565653962313830303433323061306334303535323762356465
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit: BPF prog-id=77 op=LOAD
Oct  2 19:44:16.376000 audit[2204]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c00028b360 items=0 ppid=2182 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373565653962313830303433323061306334303535323762356465
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit: BPF prog-id=78 op=LOAD
Oct  2 19:44:16.376000 audit[2204]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c00028b3a8 items=0 ppid=2182 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373565653962313830303433323061306334303535323762356465
Oct  2 19:44:16.376000 audit: BPF prog-id=78 op=UNLOAD
Oct  2 19:44:16.376000 audit: BPF prog-id=77 op=UNLOAD
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { perfmon } for  pid=2204 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit[2204]: AVC avc:  denied  { bpf } for  pid=2204 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.376000 audit: BPF prog-id=79 op=LOAD
Oct  2 19:44:16.376000 audit[2204]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c00028b7b8 items=0 ppid=2182 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166373565653962313830303433323061306334303535323762356465
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.405000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit: BPF prog-id=80 op=LOAD
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2203 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383637623834323230393335363131313764623532363931346430
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=2203 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383637623834323230393335363131313764623532363931346430
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.406000 audit: BPF prog-id=81 op=LOAD
Oct  2 19:44:16.406000 audit[2221]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00030d3f0 items=0 ppid=2203 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383637623834323230393335363131313764623532363931346430
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit: BPF prog-id=82 op=LOAD
Oct  2 19:44:16.407000 audit[2221]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00030d438 items=0 ppid=2203 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383637623834323230393335363131313764623532363931346430
Oct  2 19:44:16.407000 audit: BPF prog-id=82 op=UNLOAD
Oct  2 19:44:16.407000 audit: BPF prog-id=81 op=UNLOAD
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { perfmon } for  pid=2221 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit[2221]: AVC avc:  denied  { bpf } for  pid=2221 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:16.407000 audit: BPF prog-id=83 op=LOAD
Oct  2 19:44:16.407000 audit[2221]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c00030d848 items=0 ppid=2203 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:16.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466383637623834323230393335363131313764623532363931346430
Oct  2 19:44:16.413132 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 19:44:16.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct  2 19:44:16.432000 audit: BPF prog-id=68 op=UNLOAD
Oct  2 19:44:16.432000 audit: BPF prog-id=67 op=UNLOAD
Oct  2 19:44:16.432000 audit: BPF prog-id=66 op=UNLOAD
Oct  2 19:44:16.433438 env[1632]: time="2023-10-02T19:44:16.427742863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpqsm,Uid:638fe839-a332-4054-92a8-71460027ef59,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\""
Oct  2 19:44:16.433438 env[1632]: time="2023-10-02T19:44:16.433365533Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\""
Oct  2 19:44:16.439013 env[1632]: time="2023-10-02T19:44:16.438962871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7nr29,Uid:6d6e9996-1c84-44cd-974e-fe7af03700b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"df867b8422093561117db526914d050ef725266518910bf1e2eb918e10683e0f\""
Oct  2 19:44:16.635404 kubelet[2091]: E1002 19:44:16.635357    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:17.635809 kubelet[2091]: E1002 19:44:17.635761    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:18.636152 kubelet[2091]: E1002 19:44:18.636037    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:19.637051 kubelet[2091]: E1002 19:44:19.636976    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:20.620797 kubelet[2091]: E1002 19:44:20.620759    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:20.641008 kubelet[2091]: E1002 19:44:20.640969    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:20.783778 kubelet[2091]: E1002 19:44:20.783751    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:21.641432 kubelet[2091]: E1002 19:44:21.641355    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:22.641698 kubelet[2091]: E1002 19:44:22.641656    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:23.072392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569389293.mount: Deactivated successfully.
Oct  2 19:44:23.644025 kubelet[2091]: E1002 19:44:23.643948    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:24.652315 kubelet[2091]: E1002 19:44:24.652274    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:25.652617 kubelet[2091]: E1002 19:44:25.652557    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:25.785347 kubelet[2091]: E1002 19:44:25.785275    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:26.652883 kubelet[2091]: E1002 19:44:26.652835    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:27.258200 env[1632]: time="2023-10-02T19:44:27.258144916Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:27.260953 env[1632]: time="2023-10-02T19:44:27.260906605Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:27.263259 env[1632]: time="2023-10-02T19:44:27.263218107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:27.263937 env[1632]: time="2023-10-02T19:44:27.263897037Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\""
Oct  2 19:44:27.266361 env[1632]: time="2023-10-02T19:44:27.265781637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\""
Oct  2 19:44:27.266834 env[1632]: time="2023-10-02T19:44:27.266799622Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct  2 19:44:27.282153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030523372.mount: Deactivated successfully.
Oct  2 19:44:27.290513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606978997.mount: Deactivated successfully.
Oct  2 19:44:27.303453 env[1632]: time="2023-10-02T19:44:27.303397442Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\""
Oct  2 19:44:27.304593 env[1632]: time="2023-10-02T19:44:27.304322710Z" level=info msg="StartContainer for \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\""
Oct  2 19:44:27.333680 systemd[1]: Started cri-containerd-785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59.scope.
Oct  2 19:44:27.350170 systemd[1]: cri-containerd-785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59.scope: Deactivated successfully.
Oct  2 19:44:27.545191 env[1632]: time="2023-10-02T19:44:27.544035309Z" level=info msg="shim disconnected" id=785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59
Oct  2 19:44:27.545191 env[1632]: time="2023-10-02T19:44:27.544111691Z" level=warning msg="cleaning up after shim disconnected" id=785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59 namespace=k8s.io
Oct  2 19:44:27.545191 env[1632]: time="2023-10-02T19:44:27.544124020Z" level=info msg="cleaning up dead shim"
Oct  2 19:44:27.564162 env[1632]: time="2023-10-02T19:44:27.564112507Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2285 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:44:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:44:27.564637 env[1632]: time="2023-10-02T19:44:27.564473996Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed"
Oct  2 19:44:27.566363 env[1632]: time="2023-10-02T19:44:27.566259877Z" level=error msg="Failed to pipe stdout of container \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\"" error="reading from a closed fifo"
Oct  2 19:44:27.566604 env[1632]: time="2023-10-02T19:44:27.566567254Z" level=error msg="Failed to pipe stderr of container \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\"" error="reading from a closed fifo"
Oct  2 19:44:27.571280 env[1632]: time="2023-10-02T19:44:27.571201256Z" level=error msg="StartContainer for \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:44:27.571650 kubelet[2091]: E1002 19:44:27.571586    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59"
Oct  2 19:44:27.571777 kubelet[2091]: E1002 19:44:27.571760    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:44:27.571777 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:44:27.571777 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:44:27.571777 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dnj9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:44:27.572011 kubelet[2091]: E1002 19:44:27.571819    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
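Note: the StartContainer failures above all trace back to the same write. During container init, runc writes the SELinux label requested by the pod (type spc_t, level s0 in the SELinuxOptions dumped above) to /proc/self/attr/keycreate so the session keyring is created with that context; on this node the kernel rejects the write with EINVAL, which surfaces as the "write /proc/self/attr/keycreate: invalid argument" error. A minimal sketch of the same write outside of runc follows; it assumes root on the affected node, and whether it succeeds depends entirely on the loaded SELinux policy.

    # Minimal sketch (assumption: run as root on the affected node). It mimics
    # the keycreate write runc performs during container init; EINVAL here
    # corresponds to the "invalid argument" error reported in the log above.
    import errno

    CONTEXT = "system_u:system_r:spc_t:s0"  # label requested via the pod's SELinuxOptions

    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(CONTEXT)
        print("keycreate context accepted:", CONTEXT)
    except OSError as e:
        print("keycreate write failed:", errno.errorcode.get(e.errno, str(e.errno)), e)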
Oct  2 19:44:27.653236 kubelet[2091]: E1002 19:44:27.653195    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:28.010669 env[1632]: time="2023-10-02T19:44:28.010614375Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Oct  2 19:44:28.062887 env[1632]: time="2023-10-02T19:44:28.062831851Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\""
Oct  2 19:44:28.064240 env[1632]: time="2023-10-02T19:44:28.064207316Z" level=info msg="StartContainer for \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\""
Oct  2 19:44:28.113969 systemd[1]: Started cri-containerd-5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb.scope.
Oct  2 19:44:28.142665 systemd[1]: cri-containerd-5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb.scope: Deactivated successfully.
Oct  2 19:44:28.171710 env[1632]: time="2023-10-02T19:44:28.171651741Z" level=info msg="shim disconnected" id=5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb
Oct  2 19:44:28.172180 env[1632]: time="2023-10-02T19:44:28.172153864Z" level=warning msg="cleaning up after shim disconnected" id=5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb namespace=k8s.io
Oct  2 19:44:28.172303 env[1632]: time="2023-10-02T19:44:28.172287665Z" level=info msg="cleaning up dead shim"
Oct  2 19:44:28.190156 env[1632]: time="2023-10-02T19:44:28.190105331Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2321 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:44:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:44:28.190786 env[1632]: time="2023-10-02T19:44:28.190718160Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed"
Oct  2 19:44:28.191117 env[1632]: time="2023-10-02T19:44:28.191076601Z" level=error msg="Failed to pipe stdout of container \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\"" error="reading from a closed fifo"
Oct  2 19:44:28.191385 env[1632]: time="2023-10-02T19:44:28.191346802Z" level=error msg="Failed to pipe stderr of container \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\"" error="reading from a closed fifo"
Oct  2 19:44:28.193475 env[1632]: time="2023-10-02T19:44:28.193429392Z" level=error msg="StartContainer for \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:44:28.194152 kubelet[2091]: E1002 19:44:28.194130    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb"
Oct  2 19:44:28.194421 kubelet[2091]: E1002 19:44:28.194385    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:44:28.194421 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:44:28.194421 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:44:28.194421 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dnj9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:44:28.194895 kubelet[2091]: E1002 19:44:28.194447    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:44:28.283923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59-rootfs.mount: Deactivated successfully.
Oct  2 19:44:28.573324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691566566.mount: Deactivated successfully.
Oct  2 19:44:28.654753 kubelet[2091]: E1002 19:44:28.654419    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:29.004413 kubelet[2091]: I1002 19:44:29.003741    2091 scope.go:115] "RemoveContainer" containerID="785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59"
Oct  2 19:44:29.004413 kubelet[2091]: I1002 19:44:29.004233    2091 scope.go:115] "RemoveContainer" containerID="785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59"
Oct  2 19:44:29.006423 env[1632]: time="2023-10-02T19:44:29.006381928Z" level=info msg="RemoveContainer for \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\""
Oct  2 19:44:29.010696 env[1632]: time="2023-10-02T19:44:29.010655745Z" level=info msg="RemoveContainer for \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\""
Oct  2 19:44:29.011116 env[1632]: time="2023-10-02T19:44:29.011029927Z" level=error msg="RemoveContainer for \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\" failed" error="rpc error: code = NotFound desc = get container info: container \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\" in namespace \"k8s.io\": not found"
Oct  2 19:44:29.012285 env[1632]: time="2023-10-02T19:44:29.012253947Z" level=info msg="RemoveContainer for \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\" returns successfully"
Oct  2 19:44:29.013636 kubelet[2091]: E1002 19:44:29.012648    2091 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\" in namespace \"k8s.io\": not found" containerID="785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59"
Oct  2 19:44:29.013636 kubelet[2091]: I1002 19:44:29.012720    2091 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59} err="rpc error: code = NotFound desc = get container info: container \"785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59\" in namespace \"k8s.io\": not found"
Oct  2 19:44:29.013636 kubelet[2091]: E1002 19:44:29.013391    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
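Note: the "back-off 10s" in the last entry is the first step of the kubelet's CrashLoopBackOff for this init container. A rough sketch of the schedule this implies, assuming upstream kubelet defaults (10s initial delay, doubling per consecutive failure, capped at 5 minutes) rather than anything specific to this node:

    # Illustrative sketch of the kubelet restart back-off implied by the
    # "back-off 10s" message above (assumed upstream defaults: 10s initial
    # delay, doubling per consecutive failure, capped at 300s).
    def crashloop_backoff(failures: int, initial: int = 10, cap: int = 300) -> int:
        """Seconds the kubelet waits before the next restart attempt."""
        return min(initial * (2 ** max(failures - 1, 0)), cap)

    for n in range(1, 8):
        print(f"failure {n}: wait {crashloop_backoff(n)}s")  # 10, 20, 40, 80, 160, 300, 300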
Oct  2 19:44:29.233017 env[1632]: time="2023-10-02T19:44:29.232967853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:29.235674 env[1632]: time="2023-10-02T19:44:29.235631405Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:29.238131 env[1632]: time="2023-10-02T19:44:29.238093003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:29.240250 env[1632]: time="2023-10-02T19:44:29.240214848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:44:29.240791 env[1632]: time="2023-10-02T19:44:29.240761514Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\""
Oct  2 19:44:29.243123 env[1632]: time="2023-10-02T19:44:29.243091135Z" level=info msg="CreateContainer within sandbox \"df867b8422093561117db526914d050ef725266518910bf1e2eb918e10683e0f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct  2 19:44:29.256863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706092794.mount: Deactivated successfully.
Oct  2 19:44:29.270993 env[1632]: time="2023-10-02T19:44:29.270942493Z" level=info msg="CreateContainer within sandbox \"df867b8422093561117db526914d050ef725266518910bf1e2eb918e10683e0f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48b85e64e22d224002adf78c53ca4d806add7bf440fe89f1f583c9102a977217\""
Oct  2 19:44:29.271675 env[1632]: time="2023-10-02T19:44:29.271516489Z" level=info msg="StartContainer for \"48b85e64e22d224002adf78c53ca4d806add7bf440fe89f1f583c9102a977217\""
Oct  2 19:44:29.282591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282597690.mount: Deactivated successfully.
Oct  2 19:44:29.314078 systemd[1]: Started cri-containerd-48b85e64e22d224002adf78c53ca4d806add7bf440fe89f1f583c9102a977217.scope.
Oct  2 19:44:29.328000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.331271 kernel: kauditd_printk_skb: 113 callbacks suppressed
Oct  2 19:44:29.331365 kernel: audit: type=1400 audit(1696275869.328:685): avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.328000 audit[2341]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=2203 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.342349 kernel: audit: type=1300 audit(1696275869.328:685): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=2203 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.342466 kernel: audit: type=1327 audit(1696275869.328:685): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623835653634653232643232343030326164663738633533636134
Oct  2 19:44:29.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623835653634653232643232343030326164663738633533636134
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.353357 kernel: audit: type=1400 audit(1696275869.332:686): avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.353481 kernel: audit: type=1400 audit(1696275869.332:686): avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.362662 kernel: audit: type=1400 audit(1696275869.332:686): avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.367383 kernel: audit: type=1400 audit(1696275869.332:686): avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.367865 kernel: audit: type=1400 audit(1696275869.332:686): avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.377069 kernel: audit: type=1400 audit(1696275869.332:686): avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.377175 kernel: audit: type=1400 audit(1696275869.332:686): avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.332000 audit: BPF prog-id=84 op=LOAD
Oct  2 19:44:29.332000 audit[2341]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001459d8 a2=78 a3=c0001b47c0 items=0 ppid=2203 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623835653634653232643232343030326164663738633533636134
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.347000 audit: BPF prog-id=85 op=LOAD
Oct  2 19:44:29.347000 audit[2341]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c0001b4808 items=0 ppid=2203 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623835653634653232643232343030326164663738633533636134
Oct  2 19:44:29.356000 audit: BPF prog-id=85 op=UNLOAD
Oct  2 19:44:29.356000 audit: BPF prog-id=84 op=UNLOAD
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { perfmon } for  pid=2341 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit[2341]: AVC avc:  denied  { bpf } for  pid=2341 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:44:29.356000 audit: BPF prog-id=86 op=LOAD
Oct  2 19:44:29.356000 audit[2341]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c0001b4898 items=0 ppid=2203 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623835653634653232643232343030326164663738633533636134
Oct  2 19:44:29.401153 env[1632]: time="2023-10-02T19:44:29.401107016Z" level=info msg="StartContainer for \"48b85e64e22d224002adf78c53ca4d806add7bf440fe89f1f583c9102a977217\" returns successfully"
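Note: the burst of AVC denials (perfmon and bpf capabilities, permissive=0) and the BPF prog LOAD/UNLOAD records above come from runc setting up the kube-proxy container; they are evidently non-fatal here, since StartContainer returns successfully in the preceding entry. The PROCTITLE field of each audit record is the hex-encoded, NUL-separated argv of the triggering process, so decoding it recovers the runc command line. A small sketch, using a shortened copy of the 19:44:29.328 record (auditd itself truncates the field):

    # Sketch: decode an audit PROCTITLE value (hex of the NUL-separated argv).
    # The sample is a shortened copy of the 19:44:29.328 record above.
    def decode_proctitle(hexstr: str) -> str:
        return " ".join(bytes.fromhex(hexstr).decode(errors="replace").split("\x00"))

    sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F"
              "6B38732E696F002D2D6C6F67")
    print(decode_proctitle(sample))
    # -> runc --root /run/containerd/runc/k8s.io --log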
Oct  2 19:44:29.451165 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
Oct  2 19:44:29.451294 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
Oct  2 19:44:29.451325 kernel: IPVS: ipvs loaded.
Oct  2 19:44:29.462514 kernel: IPVS: [rr] scheduler registered.
Oct  2 19:44:29.471526 kernel: IPVS: [wrr] scheduler registered.
Oct  2 19:44:29.479518 kernel: IPVS: [sh] scheduler registered.
Oct  2 19:44:29.522000 audit[2401]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.522000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa112f1a0 a2=0 a3=7fffa112f18c items=0 ppid=2353 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.522000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Oct  2 19:44:29.529000 audit[2402]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.529000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe01c94660 a2=0 a3=7ffe01c9464c items=0 ppid=2353 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Oct  2 19:44:29.532000 audit[2403]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.532000 audit[2404]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.532000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc94ece930 a2=0 a3=7ffc94ece91c items=0 ppid=2353 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.532000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Oct  2 19:44:29.532000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe23b164e0 a2=0 a3=7ffe23b164cc items=0 ppid=2353 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Oct  2 19:44:29.534000 audit[2405]: NETFILTER_CFG table=filter:39 family=10 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.534000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1b111c90 a2=0 a3=7ffe1b111c7c items=0 ppid=2353 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Oct  2 19:44:29.535000 audit[2406]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.535000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdefc14c0 a2=0 a3=7ffcdefc14ac items=0 ppid=2353 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.535000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Oct  2 19:44:29.537000 audit[2407]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.537000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd21dec7f0 a2=0 a3=7ffd21dec7dc items=0 ppid=2353 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Oct  2 19:44:29.541000 audit[2409]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.541000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe39414c60 a2=0 a3=7ffe39414c4c items=0 ppid=2353 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365
Oct  2 19:44:29.545000 audit[2412]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.545000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd59382e20 a2=0 a3=7ffd59382e0c items=0 ppid=2353 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.545000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669
Oct  2 19:44:29.547000 audit[2413]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.547000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0d4f26b0 a2=0 a3=7fff0d4f269c items=0 ppid=2353 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Oct  2 19:44:29.550000 audit[2415]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.550000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe22541e80 a2=0 a3=7ffe22541e6c items=0 ppid=2353 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Oct  2 19:44:29.551000 audit[2416]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.551000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7f79f8e0 a2=0 a3=7ffd7f79f8cc items=0 ppid=2353 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.551000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Oct  2 19:44:29.555000 audit[2418]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.555000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcd3a7c5b0 a2=0 a3=7ffcd3a7c59c items=0 ppid=2353 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.555000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Oct  2 19:44:29.559000 audit[2421]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.559000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc2d56b060 a2=0 a3=7ffc2d56b04c items=0 ppid=2353 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53
Oct  2 19:44:29.560000 audit[2422]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.560000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef36eb760 a2=0 a3=7ffef36eb74c items=0 ppid=2353 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Oct  2 19:44:29.564000 audit[2424]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.564000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefdda3ce0 a2=0 a3=7ffefdda3ccc items=0 ppid=2353 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Oct  2 19:44:29.565000 audit[2425]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.565000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7f42ee90 a2=0 a3=7ffe7f42ee7c items=0 ppid=2353 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Oct  2 19:44:29.571000 audit[2427]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.571000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5ea55010 a2=0 a3=7fff5ea54ffc items=0 ppid=2353 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Oct  2 19:44:29.595000 audit[2430]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.595000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2a68fd60 a2=0 a3=7ffd2a68fd4c items=0 ppid=2353 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Oct  2 19:44:29.600000 audit[2433]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.600000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb3cd3020 a2=0 a3=7fffb3cd300c items=0 ppid=2353 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Oct  2 19:44:29.602000 audit[2434]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.602000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc94d1ec70 a2=0 a3=7ffc94d1ec5c items=0 ppid=2353 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Oct  2 19:44:29.605000 audit[2436]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.605000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe373dc330 a2=0 a3=7ffe373dc31c items=0 ppid=2353 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.605000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Oct  2 19:44:29.608000 audit[2439]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct  2 19:44:29.608000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc0e9c8170 a2=0 a3=7ffc0e9c815c items=0 ppid=2353 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.608000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Oct  2 19:44:29.620000 audit[2443]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct  2 19:44:29.620000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffda640c7e0 a2=0 a3=7ffda640c7cc items=0 ppid=2353 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct  2 19:44:29.629000 audit[2443]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct  2 19:44:29.629000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffda640c7e0 a2=0 a3=7ffda640c7cc items=0 ppid=2353 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct  2 19:44:29.634000 audit[2448]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.634000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc42163660 a2=0 a3=7ffc4216364c items=0 ppid=2353 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Oct  2 19:44:29.637000 audit[2450]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.637000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff97705560 a2=0 a3=7fff9770554c items=0 ppid=2353 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.637000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963
Oct  2 19:44:29.642000 audit[2453]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.642000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd80eff8c0 a2=0 a3=7ffd80eff8ac items=0 ppid=2353 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.642000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276
Oct  2 19:44:29.644000 audit[2454]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.644000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa57757c0 a2=0 a3=7fffa57757ac items=0 ppid=2353 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Oct  2 19:44:29.650000 audit[2456]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.650000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa2929590 a2=0 a3=7fffa292957c items=0 ppid=2353 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Oct  2 19:44:29.652000 audit[2457]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.652000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff32a41110 a2=0 a3=7fff32a410fc items=0 ppid=2353 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.652000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Oct  2 19:44:29.654968 kubelet[2091]: E1002 19:44:29.654942    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:29.661000 audit[2459]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.661000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc144d9c90 a2=0 a3=7ffc144d9c7c items=0 ppid=2353 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.661000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245
Oct  2 19:44:29.676000 audit[2462]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.676000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe98c6f4e0 a2=0 a3=7ffe98c6f4cc items=0 ppid=2353 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Oct  2 19:44:29.678000 audit[2463]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.678000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0e510690 a2=0 a3=7fff0e51067c items=0 ppid=2353 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Oct  2 19:44:29.681000 audit[2465]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.681000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff4f035340 a2=0 a3=7fff4f03532c items=0 ppid=2353 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Oct  2 19:44:29.682000 audit[2466]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.682000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6a0e4bb0 a2=0 a3=7ffc6a0e4b9c items=0 ppid=2353 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Oct  2 19:44:29.686000 audit[2468]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.686000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd8bd01770 a2=0 a3=7ffd8bd0175c items=0 ppid=2353 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Oct  2 19:44:29.691000 audit[2471]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.691000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0deb2a70 a2=0 a3=7ffe0deb2a5c items=0 ppid=2353 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Oct  2 19:44:29.695000 audit[2474]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.695000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc080fbbc0 a2=0 a3=7ffc080fbbac items=0 ppid=2353 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Oct  2 19:44:29.700000 audit[2475]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.700000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffec5411a10 a2=0 a3=7ffec54119fc items=0 ppid=2353 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.700000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Oct  2 19:44:29.708000 audit[2477]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.708000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff7d6df2c0 a2=0 a3=7fff7d6df2ac items=0 ppid=2353 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Oct  2 19:44:29.716000 audit[2480]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Oct  2 19:44:29.716000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffdab135740 a2=0 a3=7ffdab13572c items=0 ppid=2353 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.716000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Oct  2 19:44:29.722000 audit[2484]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Oct  2 19:44:29.722000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff1d8cecc0 a2=0 a3=7fff1d8cecac items=0 ppid=2353 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.722000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct  2 19:44:29.723000 audit[2484]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Oct  2 19:44:29.723000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7fff1d8cecc0 a2=0 a3=7fff1d8cecac items=0 ppid=2353 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:44:29.723000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
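The PROCTITLE values in the audit events above are hex-encoded argv strings with NUL separators; a minimal Python sketch for decoding one into a readable ip6tables command line (the example value is copied from the table=nat:74 event above; any other proctitle value from this log can be substituted):

    # Decode an audit PROCTITLE field (hex-encoded, NUL-separated argv) into a command line.
    proctitle = "6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174"
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # prints: ip6tables -w 5 -W 100000 -N KUBE-SERVICES -t nat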
Oct  2 19:44:30.015426 kubelet[2091]: E1002 19:44:30.013611    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:44:30.651050 kubelet[2091]: W1002 19:44:30.651004    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638fe839_a332_4054_92a8_71460027ef59.slice/cri-containerd-785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59.scope WatchSource:0}: container "785292b1ebef8f746462037132ecba2deb85d45afb3f4f814c47ecffee31ee59" in namespace "k8s.io": not found
Oct  2 19:44:30.655673 kubelet[2091]: E1002 19:44:30.655640    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:30.786559 kubelet[2091]: E1002 19:44:30.786529    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:30.917581 update_engine[1623]: I1002 19:44:30.917083  1623 update_attempter.cc:505] Updating boot flags...
Oct  2 19:44:31.656199 kubelet[2091]: E1002 19:44:31.656149    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:32.656925 kubelet[2091]: E1002 19:44:32.656853    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:33.657008 kubelet[2091]: E1002 19:44:33.656949    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:33.763196 kubelet[2091]: W1002 19:44:33.763038    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638fe839_a332_4054_92a8_71460027ef59.slice/cri-containerd-5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb.scope WatchSource:0}: task 5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb not found: not found
Oct  2 19:44:34.657453 kubelet[2091]: E1002 19:44:34.657407    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:35.658467 kubelet[2091]: E1002 19:44:35.658410    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:35.787774 kubelet[2091]: E1002 19:44:35.787739    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:36.659088 kubelet[2091]: E1002 19:44:36.659045    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:37.659769 kubelet[2091]: E1002 19:44:37.659721    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:38.660813 kubelet[2091]: E1002 19:44:38.660764    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:39.661325 kubelet[2091]: E1002 19:44:39.661272    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:40.620526 kubelet[2091]: E1002 19:44:40.620468    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:40.662111 kubelet[2091]: E1002 19:44:40.662060    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:40.788577 kubelet[2091]: E1002 19:44:40.788548    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:41.662602 kubelet[2091]: E1002 19:44:41.662551    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:42.662895 kubelet[2091]: E1002 19:44:42.662842    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:42.940989 env[1632]: time="2023-10-02T19:44:42.940770217Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}"
Oct  2 19:44:42.956632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671307015.mount: Deactivated successfully.
Oct  2 19:44:42.964996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889970809.mount: Deactivated successfully.
Oct  2 19:44:42.970837 env[1632]: time="2023-10-02T19:44:42.970779472Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506\""
Oct  2 19:44:42.972020 env[1632]: time="2023-10-02T19:44:42.971984097Z" level=info msg="StartContainer for \"3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506\""
Oct  2 19:44:42.996253 systemd[1]: Started cri-containerd-3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506.scope.
Oct  2 19:44:43.011575 systemd[1]: cri-containerd-3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506.scope: Deactivated successfully.
Oct  2 19:44:43.237557 env[1632]: time="2023-10-02T19:44:43.236420375Z" level=info msg="shim disconnected" id=3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506
Oct  2 19:44:43.237557 env[1632]: time="2023-10-02T19:44:43.236475529Z" level=warning msg="cleaning up after shim disconnected" id=3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506 namespace=k8s.io
Oct  2 19:44:43.237557 env[1632]: time="2023-10-02T19:44:43.236506858Z" level=info msg="cleaning up dead shim"
Oct  2 19:44:43.248240 env[1632]: time="2023-10-02T19:44:43.248181574Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2691 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:44:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:44:43.248576 env[1632]: time="2023-10-02T19:44:43.248501276Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed"
Oct  2 19:44:43.250807 env[1632]: time="2023-10-02T19:44:43.250748793Z" level=error msg="Failed to pipe stderr of container \"3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506\"" error="reading from a closed fifo"
Oct  2 19:44:43.251021 env[1632]: time="2023-10-02T19:44:43.250966459Z" level=error msg="Failed to pipe stdout of container \"3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506\"" error="reading from a closed fifo"
Oct  2 19:44:43.256108 env[1632]: time="2023-10-02T19:44:43.256043797Z" level=error msg="StartContainer for \"3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:44:43.256411 kubelet[2091]: E1002 19:44:43.256388    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506"
Oct  2 19:44:43.256590 kubelet[2091]: E1002 19:44:43.256532    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:44:43.256590 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:44:43.256590 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:44:43.256590 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dnj9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:44:43.256955 kubelet[2091]: E1002 19:44:43.256584    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:44:43.663816 kubelet[2091]: E1002 19:44:43.663763    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:43.953315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506-rootfs.mount: Deactivated successfully.
Oct  2 19:44:44.048631 kubelet[2091]: I1002 19:44:44.047770    2091 scope.go:115] "RemoveContainer" containerID="5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb"
Oct  2 19:44:44.048631 kubelet[2091]: I1002 19:44:44.048116    2091 scope.go:115] "RemoveContainer" containerID="5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb"
Oct  2 19:44:44.050208 env[1632]: time="2023-10-02T19:44:44.050162659Z" level=info msg="RemoveContainer for \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\""
Oct  2 19:44:44.051822 env[1632]: time="2023-10-02T19:44:44.050354646Z" level=info msg="RemoveContainer for \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\""
Oct  2 19:44:44.052695 env[1632]: time="2023-10-02T19:44:44.052646712Z" level=error msg="RemoveContainer for \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\" failed" error="failed to set removing state for container \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\": container is already in removing state"
Oct  2 19:44:44.052826 kubelet[2091]: E1002 19:44:44.052808    2091 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\": container is already in removing state" containerID="5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb"
Oct  2 19:44:44.052930 kubelet[2091]: I1002 19:44:44.052848    2091 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb} err="rpc error: code = Unknown desc = failed to set removing state for container \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\": container is already in removing state"
Oct  2 19:44:44.056093 env[1632]: time="2023-10-02T19:44:44.056057969Z" level=info msg="RemoveContainer for \"5d1a5bc6f879b5d17d5257e55b47529704fe0bbe1274cb9b45b1e799eedd56fb\" returns successfully"
Oct  2 19:44:44.056637 kubelet[2091]: E1002 19:44:44.056616    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:44:44.664429 kubelet[2091]: E1002 19:44:44.664389    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:45.664826 kubelet[2091]: E1002 19:44:45.664771    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:45.789888 kubelet[2091]: E1002 19:44:45.789860    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:46.342644 kubelet[2091]: W1002 19:44:46.342599    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638fe839_a332_4054_92a8_71460027ef59.slice/cri-containerd-3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506.scope WatchSource:0}: task 3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506 not found: not found
Oct  2 19:44:46.665311 kubelet[2091]: E1002 19:44:46.665257    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:47.665718 kubelet[2091]: E1002 19:44:47.665664    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:48.666528 kubelet[2091]: E1002 19:44:48.666475    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:49.667649 kubelet[2091]: E1002 19:44:49.667549    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:50.667819 kubelet[2091]: E1002 19:44:50.667753    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:50.790871 kubelet[2091]: E1002 19:44:50.790840    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:51.668467 kubelet[2091]: E1002 19:44:51.668418    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:52.669125 kubelet[2091]: E1002 19:44:52.669073    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:53.669510 kubelet[2091]: E1002 19:44:53.669451    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:54.670411 kubelet[2091]: E1002 19:44:54.670356    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:55.671171 kubelet[2091]: E1002 19:44:55.671122    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:55.792330 kubelet[2091]: E1002 19:44:55.792305    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:44:56.671591 kubelet[2091]: E1002 19:44:56.671548    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:57.671736 kubelet[2091]: E1002 19:44:57.671687    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:57.938409 kubelet[2091]: E1002 19:44:57.938085    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:44:58.672749 kubelet[2091]: E1002 19:44:58.672689    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:44:59.673622 kubelet[2091]: E1002 19:44:59.673571    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:00.621268 kubelet[2091]: E1002 19:45:00.620754    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:00.673833 kubelet[2091]: E1002 19:45:00.673718    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:00.793785 kubelet[2091]: E1002 19:45:00.793764    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:01.674931 kubelet[2091]: E1002 19:45:01.674875    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:02.675822 kubelet[2091]: E1002 19:45:02.675768    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:03.676847 kubelet[2091]: E1002 19:45:03.676795    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:04.677061 kubelet[2091]: E1002 19:45:04.676970    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:05.677647 kubelet[2091]: E1002 19:45:05.677543    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:05.795048 kubelet[2091]: E1002 19:45:05.795006    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:06.678541 kubelet[2091]: E1002 19:45:06.678479    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:07.679108 kubelet[2091]: E1002 19:45:07.679056    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:08.680197 kubelet[2091]: E1002 19:45:08.680144    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:09.681273 kubelet[2091]: E1002 19:45:09.681224    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:09.942113 env[1632]: time="2023-10-02T19:45:09.941853859Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}"
Oct  2 19:45:09.963160 env[1632]: time="2023-10-02T19:45:09.963069995Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\""
Oct  2 19:45:09.964126 env[1632]: time="2023-10-02T19:45:09.964058955Z" level=info msg="StartContainer for \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\""
Oct  2 19:45:10.025304 systemd[1]: Started cri-containerd-fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d.scope.
Oct  2 19:45:10.045544 systemd[1]: cri-containerd-fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d.scope: Deactivated successfully.
Oct  2 19:45:10.068946 env[1632]: time="2023-10-02T19:45:10.068723455Z" level=info msg="shim disconnected" id=fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d
Oct  2 19:45:10.068946 env[1632]: time="2023-10-02T19:45:10.068944179Z" level=warning msg="cleaning up after shim disconnected" id=fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d namespace=k8s.io
Oct  2 19:45:10.068946 env[1632]: time="2023-10-02T19:45:10.068985004Z" level=info msg="cleaning up dead shim"
Oct  2 19:45:10.088865 env[1632]: time="2023-10-02T19:45:10.088808454Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2734 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:45:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:45:10.089163 env[1632]: time="2023-10-02T19:45:10.089097141Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed"
Oct  2 19:45:10.092768 env[1632]: time="2023-10-02T19:45:10.092585262Z" level=error msg="Failed to pipe stdout of container \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\"" error="reading from a closed fifo"
Oct  2 19:45:10.092982 env[1632]: time="2023-10-02T19:45:10.092861969Z" level=error msg="Failed to pipe stderr of container \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\"" error="reading from a closed fifo"
Oct  2 19:45:10.096282 env[1632]: time="2023-10-02T19:45:10.096211774Z" level=error msg="StartContainer for \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:45:10.096646 kubelet[2091]: E1002 19:45:10.096622    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d"
Oct  2 19:45:10.096837 kubelet[2091]: E1002 19:45:10.096781    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:45:10.096837 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:45:10.096837 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:45:10.096837 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dnj9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:45:10.097064 kubelet[2091]: E1002 19:45:10.096830    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:45:10.100179 kubelet[2091]: I1002 19:45:10.100019    2091 scope.go:115] "RemoveContainer" containerID="3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506"
Oct  2 19:45:10.101835 env[1632]: time="2023-10-02T19:45:10.101798831Z" level=info msg="RemoveContainer for \"3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506\""
Oct  2 19:45:10.107551 env[1632]: time="2023-10-02T19:45:10.107503232Z" level=info msg="RemoveContainer for \"3cfd967b5b463e75f3c1414db6e336f2f25c2066ebbcc3541245867b5f657506\" returns successfully"
Oct  2 19:45:10.682427 kubelet[2091]: E1002 19:45:10.682377    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:10.795853 kubelet[2091]: E1002 19:45:10.795826    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:10.956185 systemd[1]: run-containerd-runc-k8s.io-fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d-runc.N1v14D.mount: Deactivated successfully.
Oct  2 19:45:10.956312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d-rootfs.mount: Deactivated successfully.
Oct  2 19:45:11.103176 kubelet[2091]: E1002 19:45:11.103130    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:45:11.683195 kubelet[2091]: E1002 19:45:11.683138    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:12.683687 kubelet[2091]: E1002 19:45:12.683583    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:13.174335 kubelet[2091]: W1002 19:45:13.174291    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638fe839_a332_4054_92a8_71460027ef59.slice/cri-containerd-fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d.scope WatchSource:0}: task fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d not found: not found
Oct  2 19:45:13.683756 kubelet[2091]: E1002 19:45:13.683703    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:14.684348 kubelet[2091]: E1002 19:45:14.684305    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:15.686402 kubelet[2091]: E1002 19:45:15.686270    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:15.797446 kubelet[2091]: E1002 19:45:15.797414    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:16.687037 kubelet[2091]: E1002 19:45:16.686986    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:17.687734 kubelet[2091]: E1002 19:45:17.687679    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:18.688974 kubelet[2091]: E1002 19:45:18.688709    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:19.689345 kubelet[2091]: E1002 19:45:19.689292    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:20.620868 kubelet[2091]: E1002 19:45:20.620816    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:20.690360 kubelet[2091]: E1002 19:45:20.690305    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:20.798680 kubelet[2091]: E1002 19:45:20.798650    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:21.691504 kubelet[2091]: E1002 19:45:21.691459    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:22.691615 kubelet[2091]: E1002 19:45:22.691566    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:23.692445 kubelet[2091]: E1002 19:45:23.692253    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:24.693256 kubelet[2091]: E1002 19:45:24.693204    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:25.693410 kubelet[2091]: E1002 19:45:25.693353    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:25.799561 kubelet[2091]: E1002 19:45:25.799526    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:26.693985 kubelet[2091]: E1002 19:45:26.693931    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:26.939608 kubelet[2091]: E1002 19:45:26.938768    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:45:27.694123 kubelet[2091]: E1002 19:45:27.694082    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:28.694461 kubelet[2091]: E1002 19:45:28.694412    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:29.694934 kubelet[2091]: E1002 19:45:29.694882    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:30.695425 kubelet[2091]: E1002 19:45:30.695319    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:30.799990 kubelet[2091]: E1002 19:45:30.799961    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:31.695725 kubelet[2091]: E1002 19:45:31.695670    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:32.696196 kubelet[2091]: E1002 19:45:32.696147    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:33.696401 kubelet[2091]: E1002 19:45:33.696294    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:34.696786 kubelet[2091]: E1002 19:45:34.696730    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:35.697367 kubelet[2091]: E1002 19:45:35.697318    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:35.802323 kubelet[2091]: E1002 19:45:35.802269    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:36.698533 kubelet[2091]: E1002 19:45:36.698473    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:37.698816 kubelet[2091]: E1002 19:45:37.698763    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:38.699933 kubelet[2091]: E1002 19:45:38.699879    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:39.700584 kubelet[2091]: E1002 19:45:39.700532    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:39.938166 kubelet[2091]: E1002 19:45:39.938123    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:45:40.620595 kubelet[2091]: E1002 19:45:40.620544    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:40.700963 kubelet[2091]: E1002 19:45:40.700920    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:40.803740 kubelet[2091]: E1002 19:45:40.803711    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:41.702072 kubelet[2091]: E1002 19:45:41.702016    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:42.702753 kubelet[2091]: E1002 19:45:42.702711    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:43.703529 kubelet[2091]: E1002 19:45:43.703464    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:44.704501 kubelet[2091]: E1002 19:45:44.704441    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:45.705582 kubelet[2091]: E1002 19:45:45.705531    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:45.805020 kubelet[2091]: E1002 19:45:45.804855    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:46.706209 kubelet[2091]: E1002 19:45:46.706156    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:47.706566 kubelet[2091]: E1002 19:45:47.706515    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:48.706684 kubelet[2091]: E1002 19:45:48.706631    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:49.707251 kubelet[2091]: E1002 19:45:49.707118    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:50.707673 kubelet[2091]: E1002 19:45:50.707618    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:50.805883 kubelet[2091]: E1002 19:45:50.805850    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:51.708631 kubelet[2091]: E1002 19:45:51.708576    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:52.709399 kubelet[2091]: E1002 19:45:52.709357    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:52.942048 env[1632]: time="2023-10-02T19:45:52.941993602Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}"
Oct  2 19:45:52.962066 env[1632]: time="2023-10-02T19:45:52.961128648Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\""
Oct  2 19:45:52.962618 env[1632]: time="2023-10-02T19:45:52.962397468Z" level=info msg="StartContainer for \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\""
Oct  2 19:45:53.009341 systemd[1]: Started cri-containerd-9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61.scope.
Oct  2 19:45:53.036718 systemd[1]: cri-containerd-9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61.scope: Deactivated successfully.
Oct  2 19:45:53.058358 env[1632]: time="2023-10-02T19:45:53.058287239Z" level=info msg="shim disconnected" id=9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61
Oct  2 19:45:53.058358 env[1632]: time="2023-10-02T19:45:53.058357267Z" level=warning msg="cleaning up after shim disconnected" id=9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61 namespace=k8s.io
Oct  2 19:45:53.059197 env[1632]: time="2023-10-02T19:45:53.058369104Z" level=info msg="cleaning up dead shim"
Oct  2 19:45:53.071743 env[1632]: time="2023-10-02T19:45:53.071689796Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:45:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2778 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:45:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:45:53.072236 env[1632]: time="2023-10-02T19:45:53.072175949Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed"
Oct  2 19:45:53.072577 env[1632]: time="2023-10-02T19:45:53.072526420Z" level=error msg="Failed to pipe stderr of container \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\"" error="reading from a closed fifo"
Oct  2 19:45:53.073071 env[1632]: time="2023-10-02T19:45:53.073019613Z" level=error msg="Failed to pipe stdout of container \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\"" error="reading from a closed fifo"
Oct  2 19:45:53.075597 env[1632]: time="2023-10-02T19:45:53.075547534Z" level=error msg="StartContainer for \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:45:53.075795 kubelet[2091]: E1002 19:45:53.075773    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61"
Oct  2 19:45:53.075922 kubelet[2091]: E1002 19:45:53.075900    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:45:53.075922 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:45:53.075922 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:45:53.075922 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dnj9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:45:53.076250 kubelet[2091]: E1002 19:45:53.075948    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:45:53.181929 kubelet[2091]: I1002 19:45:53.181897    2091 scope.go:115] "RemoveContainer" containerID="fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d"
Oct  2 19:45:53.182278 kubelet[2091]: I1002 19:45:53.182254    2091 scope.go:115] "RemoveContainer" containerID="fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d"
Oct  2 19:45:53.189214 env[1632]: time="2023-10-02T19:45:53.189172508Z" level=info msg="RemoveContainer for \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\""
Oct  2 19:45:53.190076 env[1632]: time="2023-10-02T19:45:53.190044219Z" level=info msg="RemoveContainer for \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\""
Oct  2 19:45:53.190206 env[1632]: time="2023-10-02T19:45:53.190138576Z" level=error msg="RemoveContainer for \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\" failed" error="failed to set removing state for container \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\": container is already in removing state"
Oct  2 19:45:53.190446 kubelet[2091]: E1002 19:45:53.190414    2091 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\": container is already in removing state" containerID="fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d"
Oct  2 19:45:53.190704 kubelet[2091]: E1002 19:45:53.190463    2091 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d": container is already in removing state; Skipping pod "cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)"
Oct  2 19:45:53.191309 kubelet[2091]: E1002 19:45:53.191275    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:45:53.196053 env[1632]: time="2023-10-02T19:45:53.196008096Z" level=info msg="RemoveContainer for \"fb6c1d71bdc7cda2b54df94134d267a3079142f855252f6eeb91f19964e2a69d\" returns successfully"
Oct  2 19:45:53.710361 kubelet[2091]: E1002 19:45:53.710321    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:53.953673 systemd[1]: run-containerd-runc-k8s.io-9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61-runc.d4Bh45.mount: Deactivated successfully.
Oct  2 19:45:53.953792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61-rootfs.mount: Deactivated successfully.
Oct  2 19:45:54.710749 kubelet[2091]: E1002 19:45:54.710702    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:55.711754 kubelet[2091]: E1002 19:45:55.711703    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:55.806946 kubelet[2091]: E1002 19:45:55.806898    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:45:56.163193 kubelet[2091]: W1002 19:45:56.163043    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638fe839_a332_4054_92a8_71460027ef59.slice/cri-containerd-9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61.scope WatchSource:0}: task 9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61 not found: not found
Oct  2 19:45:56.712697 kubelet[2091]: E1002 19:45:56.712646    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:57.713749 kubelet[2091]: E1002 19:45:57.713696    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:58.714305 kubelet[2091]: E1002 19:45:58.714254    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:45:59.715093 kubelet[2091]: E1002 19:45:59.715032    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:00.620796 kubelet[2091]: E1002 19:46:00.620740    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:00.715838 kubelet[2091]: E1002 19:46:00.715784    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:00.808456 kubelet[2091]: E1002 19:46:00.808427    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:01.716879 kubelet[2091]: E1002 19:46:01.716821    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:02.717194 kubelet[2091]: E1002 19:46:02.717050    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:03.717356 kubelet[2091]: E1002 19:46:03.717288    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:04.717564 kubelet[2091]: E1002 19:46:04.717511    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:05.717852 kubelet[2091]: E1002 19:46:05.717803    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:05.810136 kubelet[2091]: E1002 19:46:05.810110    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:06.719114 kubelet[2091]: E1002 19:46:06.718984    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:07.719568 kubelet[2091]: E1002 19:46:07.719515    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:07.937839 kubelet[2091]: E1002 19:46:07.937806    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:46:08.720600 kubelet[2091]: E1002 19:46:08.720550    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:09.720956 kubelet[2091]: E1002 19:46:09.720865    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:10.721166 kubelet[2091]: E1002 19:46:10.721118    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:10.812426 kubelet[2091]: E1002 19:46:10.812376    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:11.721692 kubelet[2091]: E1002 19:46:11.721642    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:12.721975 kubelet[2091]: E1002 19:46:12.721932    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:13.723073 kubelet[2091]: E1002 19:46:13.723012    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:14.723520 kubelet[2091]: E1002 19:46:14.723452    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:15.724571 kubelet[2091]: E1002 19:46:15.724518    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:15.813241 kubelet[2091]: E1002 19:46:15.813200    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:16.724767 kubelet[2091]: E1002 19:46:16.724726    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:17.726718 kubelet[2091]: E1002 19:46:17.726657    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:18.726912 kubelet[2091]: E1002 19:46:18.726860    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:19.727417 kubelet[2091]: E1002 19:46:19.727365    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:20.620506 kubelet[2091]: E1002 19:46:20.620450    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:20.728343 kubelet[2091]: E1002 19:46:20.728293    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:20.814833 kubelet[2091]: E1002 19:46:20.814800    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:21.728701 kubelet[2091]: E1002 19:46:21.728653    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:21.937902 kubelet[2091]: E1002 19:46:21.937858    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:46:22.728898 kubelet[2091]: E1002 19:46:22.728839    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:23.729957 kubelet[2091]: E1002 19:46:23.729913    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:24.731008 kubelet[2091]: E1002 19:46:24.730956    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:25.732046 kubelet[2091]: E1002 19:46:25.731987    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:25.816620 kubelet[2091]: E1002 19:46:25.816577    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:26.733208 kubelet[2091]: E1002 19:46:26.733155    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:27.733763 kubelet[2091]: E1002 19:46:27.733711    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:28.734239 kubelet[2091]: E1002 19:46:28.734197    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:29.734733 kubelet[2091]: E1002 19:46:29.734677    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:30.734998 kubelet[2091]: E1002 19:46:30.734916    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:30.817799 kubelet[2091]: E1002 19:46:30.817763    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:31.736095 kubelet[2091]: E1002 19:46:31.736040    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:32.737250 kubelet[2091]: E1002 19:46:32.737193    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:33.737717 kubelet[2091]: E1002 19:46:33.737667    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:33.937604 kubelet[2091]: E1002 19:46:33.937561    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:46:34.737933 kubelet[2091]: E1002 19:46:34.737885    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:35.738348 kubelet[2091]: E1002 19:46:35.738305    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:35.818474 kubelet[2091]: E1002 19:46:35.818439    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:36.738792 kubelet[2091]: E1002 19:46:36.738740    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:37.739276 kubelet[2091]: E1002 19:46:37.739235    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:38.740211 kubelet[2091]: E1002 19:46:38.740158    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:39.740524 kubelet[2091]: E1002 19:46:39.740459    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:40.620597 kubelet[2091]: E1002 19:46:40.620464    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:40.741374 kubelet[2091]: E1002 19:46:40.741325    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:40.819906 kubelet[2091]: E1002 19:46:40.819876    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:41.742136 kubelet[2091]: E1002 19:46:41.742084    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:42.743057 kubelet[2091]: E1002 19:46:42.742959    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:43.743741 kubelet[2091]: E1002 19:46:43.743700    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:44.744259 kubelet[2091]: E1002 19:46:44.744206    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:44.938351 kubelet[2091]: E1002 19:46:44.938137    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:46:45.744406 kubelet[2091]: E1002 19:46:45.744354    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:45.820716 kubelet[2091]: E1002 19:46:45.820684    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:46.745363 kubelet[2091]: E1002 19:46:46.745318    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:47.745842 kubelet[2091]: E1002 19:46:47.745791    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:48.746595 kubelet[2091]: E1002 19:46:48.746546    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:49.747611 kubelet[2091]: E1002 19:46:49.747559    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:50.748158 kubelet[2091]: E1002 19:46:50.748037    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:50.822199 kubelet[2091]: E1002 19:46:50.822095    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:51.748855 kubelet[2091]: E1002 19:46:51.748709    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:52.749031 kubelet[2091]: E1002 19:46:52.748971    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:53.749583 kubelet[2091]: E1002 19:46:53.749529    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:54.749687 kubelet[2091]: E1002 19:46:54.749632    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:55.750765 kubelet[2091]: E1002 19:46:55.750714    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:55.822886 kubelet[2091]: E1002 19:46:55.822861    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:46:56.750894 kubelet[2091]: E1002 19:46:56.750843    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:57.751612 kubelet[2091]: E1002 19:46:57.751557    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:58.752209 kubelet[2091]: E1002 19:46:58.752152    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:46:58.938248 kubelet[2091]: E1002 19:46:58.938208    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:46:59.752364 kubelet[2091]: E1002 19:46:59.752288    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:00.620571 kubelet[2091]: E1002 19:47:00.620523    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:00.752989 kubelet[2091]: E1002 19:47:00.752939    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:00.823499 kubelet[2091]: E1002 19:47:00.823453    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:01.753618 kubelet[2091]: E1002 19:47:01.753567    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:02.753721 kubelet[2091]: E1002 19:47:02.753665    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:03.754272 kubelet[2091]: E1002 19:47:03.754220    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:04.755233 kubelet[2091]: E1002 19:47:04.755189    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:05.755410 kubelet[2091]: E1002 19:47:05.755371    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:05.824379 kubelet[2091]: E1002 19:47:05.824275    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:06.756097 kubelet[2091]: E1002 19:47:06.756054    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:07.757138 kubelet[2091]: E1002 19:47:07.757096    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:08.757304 kubelet[2091]: E1002 19:47:08.757248    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:09.757502 kubelet[2091]: E1002 19:47:09.757410    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:10.758192 kubelet[2091]: E1002 19:47:10.758152    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:10.840549 kubelet[2091]: E1002 19:47:10.840521    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:11.759096 kubelet[2091]: E1002 19:47:11.759052    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:12.759608 kubelet[2091]: E1002 19:47:12.759555    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:13.759991 kubelet[2091]: E1002 19:47:13.759941    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:13.943436 env[1632]: time="2023-10-02T19:47:13.943353233Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}"
Oct  2 19:47:13.969354 env[1632]: time="2023-10-02T19:47:13.969299351Z" level=info msg="CreateContainer within sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8\""
Oct  2 19:47:13.970070 env[1632]: time="2023-10-02T19:47:13.970037375Z" level=info msg="StartContainer for \"01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8\""
Oct  2 19:47:14.019711 systemd[1]: Started cri-containerd-01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8.scope.
Oct  2 19:47:14.036242 systemd[1]: cri-containerd-01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8.scope: Deactivated successfully.
Oct  2 19:47:14.055341 env[1632]: time="2023-10-02T19:47:14.055279163Z" level=info msg="shim disconnected" id=01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8
Oct  2 19:47:14.055341 env[1632]: time="2023-10-02T19:47:14.055334893Z" level=warning msg="cleaning up after shim disconnected" id=01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8 namespace=k8s.io
Oct  2 19:47:14.055341 env[1632]: time="2023-10-02T19:47:14.055347615Z" level=info msg="cleaning up dead shim"
Oct  2 19:47:14.067428 env[1632]: time="2023-10-02T19:47:14.067369362Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2825 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:47:14.067749 env[1632]: time="2023-10-02T19:47:14.067685301Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed"
Oct  2 19:47:14.067969 env[1632]: time="2023-10-02T19:47:14.067922623Z" level=error msg="Failed to pipe stdout of container \"01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8\"" error="reading from a closed fifo"
Oct  2 19:47:14.068322 env[1632]: time="2023-10-02T19:47:14.068080788Z" level=error msg="Failed to pipe stderr of container \"01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8\"" error="reading from a closed fifo"
Oct  2 19:47:14.070111 env[1632]: time="2023-10-02T19:47:14.070067017Z" level=error msg="StartContainer for \"01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:47:14.070340 kubelet[2091]: E1002 19:47:14.070309    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8"
Oct  2 19:47:14.070464 kubelet[2091]: E1002 19:47:14.070442    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:47:14.070464 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:47:14.070464 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:47:14.070464 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dnj9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:47:14.070861 kubelet[2091]: E1002 19:47:14.070649    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:47:14.333202 kubelet[2091]: I1002 19:47:14.333085    2091 scope.go:115] "RemoveContainer" containerID="9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61"
Oct  2 19:47:14.334463 kubelet[2091]: I1002 19:47:14.334443    2091 scope.go:115] "RemoveContainer" containerID="9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61"
Oct  2 19:47:14.339473 env[1632]: time="2023-10-02T19:47:14.339424632Z" level=info msg="RemoveContainer for \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\""
Oct  2 19:47:14.339782 env[1632]: time="2023-10-02T19:47:14.339750155Z" level=info msg="RemoveContainer for \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\""
Oct  2 19:47:14.339965 env[1632]: time="2023-10-02T19:47:14.339923159Z" level=error msg="RemoveContainer for \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\" failed" error="failed to set removing state for container \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\": container is already in removing state"
Oct  2 19:47:14.340140 kubelet[2091]: E1002 19:47:14.340115    2091 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\": container is already in removing state" containerID="9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61"
Oct  2 19:47:14.340235 kubelet[2091]: I1002 19:47:14.340160    2091 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61} err="rpc error: code = Unknown desc = failed to set removing state for container \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\": container is already in removing state"
Oct  2 19:47:14.348191 env[1632]: time="2023-10-02T19:47:14.348136571Z" level=info msg="RemoveContainer for \"9b99596981cf5303f4105d7e160f579e889e96ebf925fb0046f97a66668f5f61\" returns successfully"
Oct  2 19:47:14.348724 kubelet[2091]: E1002 19:47:14.348701    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-xpqsm_kube-system(638fe839-a332-4054-92a8-71460027ef59)\"" pod="kube-system/cilium-xpqsm" podUID=638fe839-a332-4054-92a8-71460027ef59
Oct  2 19:47:14.760347 kubelet[2091]: E1002 19:47:14.760309    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:14.959773 systemd[1]: run-containerd-runc-k8s.io-01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8-runc.VkBbUq.mount: Deactivated successfully.
Oct  2 19:47:14.959925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8-rootfs.mount: Deactivated successfully.
Oct  2 19:47:15.760459 kubelet[2091]: E1002 19:47:15.760404    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:15.841886 kubelet[2091]: E1002 19:47:15.841857    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:16.760637 kubelet[2091]: E1002 19:47:16.760584    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:17.160764 kubelet[2091]: W1002 19:47:17.160720    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638fe839_a332_4054_92a8_71460027ef59.slice/cri-containerd-01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8.scope WatchSource:0}: task 01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8 not found: not found
Oct  2 19:47:17.761635 kubelet[2091]: E1002 19:47:17.761581    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:18.762610 kubelet[2091]: E1002 19:47:18.762556    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:19.763598 kubelet[2091]: E1002 19:47:19.763556    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:20.620977 kubelet[2091]: E1002 19:47:20.620925    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:20.763899 kubelet[2091]: E1002 19:47:20.763846    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:20.843444 kubelet[2091]: E1002 19:47:20.843414    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:21.764053 kubelet[2091]: E1002 19:47:21.764007    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:22.764725 kubelet[2091]: E1002 19:47:22.764672    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:23.765502 kubelet[2091]: E1002 19:47:23.765443    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:24.765938 kubelet[2091]: E1002 19:47:24.765882    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:25.766412 kubelet[2091]: E1002 19:47:25.766360    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:25.844263 kubelet[2091]: E1002 19:47:25.844234    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:25.910762 env[1632]: time="2023-10-02T19:47:25.910721179Z" level=info msg="StopPodSandbox for \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\""
Oct  2 19:47:25.913238 env[1632]: time="2023-10-02T19:47:25.910788302Z" level=info msg="Container to stop \"01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct  2 19:47:25.912759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe-shm.mount: Deactivated successfully.
Oct  2 19:47:25.920000 audit: BPF prog-id=76 op=UNLOAD
Oct  2 19:47:25.921064 systemd[1]: cri-containerd-1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe.scope: Deactivated successfully.
Oct  2 19:47:25.922661 kernel: kauditd_printk_skb: 165 callbacks suppressed
Oct  2 19:47:25.922758 kernel: audit: type=1334 audit(1696276045.920:735): prog-id=76 op=UNLOAD
Oct  2 19:47:25.926000 audit: BPF prog-id=79 op=UNLOAD
Oct  2 19:47:25.929519 kernel: audit: type=1334 audit(1696276045.926:736): prog-id=79 op=UNLOAD
Oct  2 19:47:25.948618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe-rootfs.mount: Deactivated successfully.
Oct  2 19:47:25.964964 env[1632]: time="2023-10-02T19:47:25.964907195Z" level=info msg="shim disconnected" id=1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe
Oct  2 19:47:25.965330 env[1632]: time="2023-10-02T19:47:25.965286699Z" level=warning msg="cleaning up after shim disconnected" id=1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe namespace=k8s.io
Oct  2 19:47:25.965330 env[1632]: time="2023-10-02T19:47:25.965307785Z" level=info msg="cleaning up dead shim"
Oct  2 19:47:25.974613 env[1632]: time="2023-10-02T19:47:25.974561145Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2855 runtime=io.containerd.runc.v2\n"
Oct  2 19:47:25.974934 env[1632]: time="2023-10-02T19:47:25.974901136Z" level=info msg="TearDown network for sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" successfully"
Oct  2 19:47:25.975039 env[1632]: time="2023-10-02T19:47:25.974932266Z" level=info msg="StopPodSandbox for \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" returns successfully"
Oct  2 19:47:26.068010 kubelet[2091]: I1002 19:47:26.067199    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-hostproc\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068010 kubelet[2091]: I1002 19:47:26.067252    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cni-path\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068010 kubelet[2091]: I1002 19:47:26.067278    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-xtables-lock\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068010 kubelet[2091]: I1002 19:47:26.067305    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-etc-cni-netd\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068010 kubelet[2091]: I1002 19:47:26.067328    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cilium-cgroup\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068010 kubelet[2091]: I1002 19:47:26.067355    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-lib-modules\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068431 kubelet[2091]: I1002 19:47:26.067392    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/638fe839-a332-4054-92a8-71460027ef59-cilium-config-path\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068431 kubelet[2091]: I1002 19:47:26.067416    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cilium-run\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068431 kubelet[2091]: I1002 19:47:26.067455    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/638fe839-a332-4054-92a8-71460027ef59-clustermesh-secrets\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068431 kubelet[2091]: I1002 19:47:26.067494    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-host-proc-sys-net\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068431 kubelet[2091]: I1002 19:47:26.067525    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-host-proc-sys-kernel\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068431 kubelet[2091]: I1002 19:47:26.067697    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/638fe839-a332-4054-92a8-71460027ef59-hubble-tls\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068729 kubelet[2091]: I1002 19:47:26.067737    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnj9q\" (UniqueName: \"kubernetes.io/projected/638fe839-a332-4054-92a8-71460027ef59-kube-api-access-dnj9q\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068729 kubelet[2091]: I1002 19:47:26.067766    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-bpf-maps\") pod \"638fe839-a332-4054-92a8-71460027ef59\" (UID: \"638fe839-a332-4054-92a8-71460027ef59\") "
Oct  2 19:47:26.068729 kubelet[2091]: I1002 19:47:26.067869    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.068729 kubelet[2091]: I1002 19:47:26.067914    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-hostproc" (OuterVolumeSpecName: "hostproc") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.068729 kubelet[2091]: I1002 19:47:26.067935    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cni-path" (OuterVolumeSpecName: "cni-path") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.069187 kubelet[2091]: I1002 19:47:26.068014    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.069187 kubelet[2091]: I1002 19:47:26.068034    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.069187 kubelet[2091]: I1002 19:47:26.068051    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.069187 kubelet[2091]: I1002 19:47:26.068068    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.069187 kubelet[2091]: W1002 19:47:26.068251    2091 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/638fe839-a332-4054-92a8-71460027ef59/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct  2 19:47:26.071649 kubelet[2091]: I1002 19:47:26.069478    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.071649 kubelet[2091]: I1002 19:47:26.069552    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.071649 kubelet[2091]: I1002 19:47:26.071322    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638fe839-a332-4054-92a8-71460027ef59-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct  2 19:47:26.071649 kubelet[2091]: I1002 19:47:26.071599    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:26.077146 systemd[1]: var-lib-kubelet-pods-638fe839\x2da332\x2d4054\x2d92a8\x2d71460027ef59-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct  2 19:47:26.077496 systemd[1]: var-lib-kubelet-pods-638fe839\x2da332\x2d4054\x2d92a8\x2d71460027ef59-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct  2 19:47:26.081458 systemd[1]: var-lib-kubelet-pods-638fe839\x2da332\x2d4054\x2d92a8\x2d71460027ef59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddnj9q.mount: Deactivated successfully.
Oct  2 19:47:26.084409 kubelet[2091]: I1002 19:47:26.084374    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638fe839-a332-4054-92a8-71460027ef59-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct  2 19:47:26.086204 kubelet[2091]: I1002 19:47:26.086168    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638fe839-a332-4054-92a8-71460027ef59-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct  2 19:47:26.087278 kubelet[2091]: I1002 19:47:26.087218    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638fe839-a332-4054-92a8-71460027ef59-kube-api-access-dnj9q" (OuterVolumeSpecName: "kube-api-access-dnj9q") pod "638fe839-a332-4054-92a8-71460027ef59" (UID: "638fe839-a332-4054-92a8-71460027ef59"). InnerVolumeSpecName "kube-api-access-dnj9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct  2 19:47:26.168125 kubelet[2091]: I1002 19:47:26.168086    2091 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cni-path\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168125 kubelet[2091]: I1002 19:47:26.168124    2091 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-hostproc\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168125 kubelet[2091]: I1002 19:47:26.168139    2091 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cilium-cgroup\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168457 kubelet[2091]: I1002 19:47:26.168153    2091 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-lib-modules\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168457 kubelet[2091]: I1002 19:47:26.168165    2091 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-xtables-lock\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168457 kubelet[2091]: I1002 19:47:26.168178    2091 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-etc-cni-netd\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168457 kubelet[2091]: I1002 19:47:26.168190    2091 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/638fe839-a332-4054-92a8-71460027ef59-clustermesh-secrets\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168457 kubelet[2091]: I1002 19:47:26.168203    2091 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-host-proc-sys-net\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168457 kubelet[2091]: I1002 19:47:26.168222    2091 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-host-proc-sys-kernel\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168457 kubelet[2091]: I1002 19:47:26.168234    2091 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/638fe839-a332-4054-92a8-71460027ef59-cilium-config-path\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168457 kubelet[2091]: I1002 19:47:26.168247    2091 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-cilium-run\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168770 kubelet[2091]: I1002 19:47:26.168260    2091 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/638fe839-a332-4054-92a8-71460027ef59-bpf-maps\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168770 kubelet[2091]: I1002 19:47:26.168274    2091 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/638fe839-a332-4054-92a8-71460027ef59-hubble-tls\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.168770 kubelet[2091]: I1002 19:47:26.168315    2091 reconciler.go:399] "Volume detached for volume \"kube-api-access-dnj9q\" (UniqueName: \"kubernetes.io/projected/638fe839-a332-4054-92a8-71460027ef59-kube-api-access-dnj9q\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:26.361054 kubelet[2091]: I1002 19:47:26.357301    2091 scope.go:115] "RemoveContainer" containerID="01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8"
Oct  2 19:47:26.375473 env[1632]: time="2023-10-02T19:47:26.375134729Z" level=info msg="RemoveContainer for \"01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8\""
Oct  2 19:47:26.382945 env[1632]: time="2023-10-02T19:47:26.382796175Z" level=info msg="RemoveContainer for \"01f3261f4ffd769b699b68eb422f6b4b8193cd7a3a4a39b01f19e0e5f2c3d1b8\" returns successfully"
Oct  2 19:47:26.382869 systemd[1]: Removed slice kubepods-burstable-pod638fe839_a332_4054_92a8_71460027ef59.slice.
Oct  2 19:47:26.423240 kubelet[2091]: I1002 19:47:26.423206    2091 topology_manager.go:205] "Topology Admit Handler"
Oct  2 19:47:26.423549 kubelet[2091]: E1002 19:47:26.423259    2091 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: E1002 19:47:26.423273    2091 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: E1002 19:47:26.423281    2091 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: E1002 19:47:26.423289    2091 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: I1002 19:47:26.423311    2091 memory_manager.go:345] "RemoveStaleState removing state" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: I1002 19:47:26.423319    2091 memory_manager.go:345] "RemoveStaleState removing state" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: I1002 19:47:26.423326    2091 memory_manager.go:345] "RemoveStaleState removing state" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: I1002 19:47:26.423335    2091 memory_manager.go:345] "RemoveStaleState removing state" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: I1002 19:47:26.423342    2091 memory_manager.go:345] "RemoveStaleState removing state" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: E1002 19:47:26.423363    2091 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: E1002 19:47:26.423454    2091 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.423549 kubelet[2091]: I1002 19:47:26.423494    2091 memory_manager.go:345] "RemoveStaleState removing state" podUID="638fe839-a332-4054-92a8-71460027ef59" containerName="mount-cgroup"
Oct  2 19:47:26.433157 systemd[1]: Created slice kubepods-burstable-pod96b8c903_a200_4c86_8aef_9fbe94ca5cc9.slice.
Oct  2 19:47:26.585764 kubelet[2091]: I1002 19:47:26.585726    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-bpf-maps\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.585974 kubelet[2091]: I1002 19:47:26.585832    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-host-proc-sys-kernel\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.585974 kubelet[2091]: I1002 19:47:26.585894    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-config-path\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.585974 kubelet[2091]: I1002 19:47:26.585929    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-hubble-tls\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.585974 kubelet[2091]: I1002 19:47:26.585968    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fwj\" (UniqueName: \"kubernetes.io/projected/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-kube-api-access-j8fwj\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586172 kubelet[2091]: I1002 19:47:26.586004    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cni-path\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586172 kubelet[2091]: I1002 19:47:26.586053    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-xtables-lock\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586172 kubelet[2091]: I1002 19:47:26.586107    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-run\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586172 kubelet[2091]: I1002 19:47:26.586150    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-hostproc\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586359 kubelet[2091]: I1002 19:47:26.586182    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-cgroup\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586359 kubelet[2091]: I1002 19:47:26.586219    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-etc-cni-netd\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586359 kubelet[2091]: I1002 19:47:26.586251    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-lib-modules\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586359 kubelet[2091]: I1002 19:47:26.586283    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-clustermesh-secrets\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
Oct  2 19:47:26.586359 kubelet[2091]: I1002 19:47:26.586314    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-host-proc-sys-net\") pod \"cilium-bgqvw\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") " pod="kube-system/cilium-bgqvw"
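The reconciler records above enumerate the volumes the kubelet verifies as attached before it can start cilium-bgqvw. As a minimal, illustrative sketch only (it assumes journal lines in exactly the escaped form shown above and is not part of any logged component), the pod/volume pairs can be pulled out of such output like this:

    import re
    import sys

    # Sketch: extract (pod, volume) pairs from kubelet
    # "VerifyControllerAttachedVolume started for volume ..." journal lines.
    PATTERN = re.compile(r'started for volume \\"([^"\\]+)\\".*?pod="([^"]+)"')

    def attached_volumes(lines):
        for line in lines:
            match = PATTERN.search(line)
            if match:
                yield match.group(2), match.group(1)   # (pod, volume)

    if __name__ == "__main__":
        for pod, volume in attached_volumes(sys.stdin):
            print(f"{pod}\t{volume}")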
Oct  2 19:47:26.744701 env[1632]: time="2023-10-02T19:47:26.744516622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bgqvw,Uid:96b8c903-a200-4c86-8aef-9fbe94ca5cc9,Namespace:kube-system,Attempt:0,}"
Oct  2 19:47:26.763066 env[1632]: time="2023-10-02T19:47:26.762987287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct  2 19:47:26.763066 env[1632]: time="2023-10-02T19:47:26.763030111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct  2 19:47:26.763458 env[1632]: time="2023-10-02T19:47:26.763048247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct  2 19:47:26.763458 env[1632]: time="2023-10-02T19:47:26.763220836Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea pid=2881 runtime=io.containerd.runc.v2
Oct  2 19:47:26.766605 kubelet[2091]: E1002 19:47:26.766543    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:26.783471 systemd[1]: Started cri-containerd-b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea.scope.
Oct  2 19:47:26.809338 kernel: audit: type=1400 audit(1696276046.799:737): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.809529 kernel: audit: type=1400 audit(1696276046.799:738): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819356 kernel: audit: type=1400 audit(1696276046.799:739): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819576 kernel: audit: type=1400 audit(1696276046.799:740): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.829319 kernel: audit: type=1400 audit(1696276046.799:741): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.829437 kernel: audit: type=1400 audit(1696276046.799:742): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.838857 kernel: audit: type=1400 audit(1696276046.799:743): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.839070 kernel: audit: type=1400 audit(1696276046.799:744): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.799000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.809000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.809000 audit: BPF prog-id=87 op=LOAD
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bfc48 a2=10 a3=1c items=0 ppid=2881 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:26.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323230393530303233336439366266643637376163326363373263
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bf6b0 a2=3c a3=c items=0 ppid=2881 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:26.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323230393530303233336439366266643637376163326363373263
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.812000 audit: BPF prog-id=88 op=LOAD
Oct  2 19:47:26.812000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bf9d8 a2=78 a3=c000098b50 items=0 ppid=2881 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:26.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323230393530303233336439366266643637376163326363373263
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.813000 audit: BPF prog-id=89 op=LOAD
Oct  2 19:47:26.813000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bf770 a2=78 a3=c000098b98 items=0 ppid=2881 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:26.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323230393530303233336439366266643637376163326363373263
Oct  2 19:47:26.818000 audit: BPF prog-id=89 op=UNLOAD
Oct  2 19:47:26.819000 audit: BPF prog-id=88 op=UNLOAD
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { perfmon } for  pid=2892 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit[2892]: AVC avc:  denied  { bpf } for  pid=2892 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:26.819000 audit: BPF prog-id=90 op=LOAD
Oct  2 19:47:26.819000 audit[2892]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bfc30 a2=78 a3=c000098fa8 items=0 ppid=2881 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:26.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238323230393530303233336439366266643637376163326363373263
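Each PROCTITLE field in the audit records above is the process command line, hex-encoded with NUL bytes separating the argv entries; the value logged here decodes to the runc invocation for the sandbox task, truncated by the kernel's proctitle length limit. A minimal decoding sketch, assuming the hex string is supplied exactly as it appears in the journal:

    import binascii

    def decode_proctitle(hex_value: str) -> list[str]:
        """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes."""
        raw = binascii.unhexlify(hex_value)
        return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

    # The leading bytes of the value logged above, "72756E63002D2D726F6F74", decode to:
    print(decode_proctitle("72756E63002D2D726F6F74"))   # ['runc', '--root']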
Oct  2 19:47:26.854114 env[1632]: time="2023-10-02T19:47:26.854077152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bgqvw,Uid:96b8c903-a200-4c86-8aef-9fbe94ca5cc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\""
Oct  2 19:47:26.857099 env[1632]: time="2023-10-02T19:47:26.857053123Z" level=info msg="CreateContainer within sandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct  2 19:47:26.875977 env[1632]: time="2023-10-02T19:47:26.875918357Z" level=info msg="CreateContainer within sandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230\""
Oct  2 19:47:26.876577 env[1632]: time="2023-10-02T19:47:26.876543028Z" level=info msg="StartContainer for \"3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230\""
Oct  2 19:47:26.899449 systemd[1]: Started cri-containerd-3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230.scope.
Oct  2 19:47:26.924678 systemd[1]: cri-containerd-3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230.scope: Deactivated successfully.
Oct  2 19:47:26.932067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230-rootfs.mount: Deactivated successfully.
Oct  2 19:47:26.942226 kubelet[2091]: I1002 19:47:26.942197    2091 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=638fe839-a332-4054-92a8-71460027ef59 path="/var/lib/kubelet/pods/638fe839-a332-4054-92a8-71460027ef59/volumes"
Oct  2 19:47:26.960163 env[1632]: time="2023-10-02T19:47:26.960105192Z" level=info msg="shim disconnected" id=3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230
Oct  2 19:47:26.960163 env[1632]: time="2023-10-02T19:47:26.960162546Z" level=warning msg="cleaning up after shim disconnected" id=3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230 namespace=k8s.io
Oct  2 19:47:26.960843 env[1632]: time="2023-10-02T19:47:26.960173537Z" level=info msg="cleaning up dead shim"
Oct  2 19:47:26.970359 env[1632]: time="2023-10-02T19:47:26.970300168Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2942 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:47:26.970680 env[1632]: time="2023-10-02T19:47:26.970602201Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed"
Oct  2 19:47:26.970935 env[1632]: time="2023-10-02T19:47:26.970899779Z" level=error msg="Failed to pipe stderr of container \"3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230\"" error="reading from a closed fifo"
Oct  2 19:47:26.974588 env[1632]: time="2023-10-02T19:47:26.974533126Z" level=error msg="Failed to pipe stdout of container \"3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230\"" error="reading from a closed fifo"
Oct  2 19:47:26.977065 env[1632]: time="2023-10-02T19:47:26.977018038Z" level=error msg="StartContainer for \"3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:47:26.977334 kubelet[2091]: E1002 19:47:26.977313    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230"
Oct  2 19:47:26.977689 kubelet[2091]: E1002 19:47:26.977668    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:47:26.977689 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:47:26.977689 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:47:26.977689 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-j8fwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-bgqvw_kube-system(96b8c903-a200-4c86-8aef-9fbe94ca5cc9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:47:26.977935 kubelet[2091]: E1002 19:47:26.977727    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bgqvw" podUID=96b8c903-a200-4c86-8aef-9fbe94ca5cc9
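The StartContainer failures above all end in runc being unable to write the container's SELinux key-creation context ("write /proc/self/attr/keycreate: invalid argument"); the init container spec dumped above requests SELinuxOptions with Type spc_t, and it is that label write which apparently fails. A minimal diagnostic sketch for checking the host's SELinux mode and the current keycreate label; the paths are the standard selinuxfs and procfs locations, and the helper names are this sketch's own, not anything from kubelet, containerd, or runc:

    from pathlib import Path

    def selinux_enforce_mode() -> str:
        """Report the SELinux mode from selinuxfs, if it is mounted in the usual place."""
        enforce = Path("/sys/fs/selinux/enforce")
        if not enforce.exists():
            return "selinuxfs not mounted"
        return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

    def keycreate_label() -> str:
        """Read this process's SELinux key-creation context from procfs."""
        try:
            value = Path("/proc/self/attr/keycreate").read_text().strip("\x00\n")
            return value or "(unset)"
        except OSError as exc:
            return f"unreadable: {exc}"

    if __name__ == "__main__":
        print("selinux mode:", selinux_enforce_mode())
        print("keycreate label:", keycreate_label())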
Oct  2 19:47:27.363045 env[1632]: time="2023-10-02T19:47:27.363003810Z" level=info msg="CreateContainer within sandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Oct  2 19:47:27.381995 env[1632]: time="2023-10-02T19:47:27.381942021Z" level=info msg="CreateContainer within sandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37\""
Oct  2 19:47:27.382567 env[1632]: time="2023-10-02T19:47:27.382535239Z" level=info msg="StartContainer for \"93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37\""
Oct  2 19:47:27.407410 systemd[1]: Started cri-containerd-93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37.scope.
Oct  2 19:47:27.423083 systemd[1]: cri-containerd-93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37.scope: Deactivated successfully.
Oct  2 19:47:27.440972 env[1632]: time="2023-10-02T19:47:27.440910815Z" level=info msg="shim disconnected" id=93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37
Oct  2 19:47:27.440972 env[1632]: time="2023-10-02T19:47:27.440970982Z" level=warning msg="cleaning up after shim disconnected" id=93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37 namespace=k8s.io
Oct  2 19:47:27.441350 env[1632]: time="2023-10-02T19:47:27.440983144Z" level=info msg="cleaning up dead shim"
Oct  2 19:47:27.451648 env[1632]: time="2023-10-02T19:47:27.451599314Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2979 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:47:27.451955 env[1632]: time="2023-10-02T19:47:27.451890596Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed"
Oct  2 19:47:27.455736 env[1632]: time="2023-10-02T19:47:27.455668855Z" level=error msg="Failed to pipe stdout of container \"93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37\"" error="reading from a closed fifo"
Oct  2 19:47:27.457481 env[1632]: time="2023-10-02T19:47:27.457418320Z" level=error msg="Failed to pipe stderr of container \"93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37\"" error="reading from a closed fifo"
Oct  2 19:47:27.459481 env[1632]: time="2023-10-02T19:47:27.459429745Z" level=error msg="StartContainer for \"93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:47:27.459704 kubelet[2091]: E1002 19:47:27.459680    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37"
Oct  2 19:47:27.459821 kubelet[2091]: E1002 19:47:27.459803    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:47:27.459821 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:47:27.459821 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:47:27.459821 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-j8fwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-bgqvw_kube-system(96b8c903-a200-4c86-8aef-9fbe94ca5cc9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:47:27.460040 kubelet[2091]: E1002 19:47:27.459857    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bgqvw" podUID=96b8c903-a200-4c86-8aef-9fbe94ca5cc9
Oct  2 19:47:27.499621 kubelet[2091]: E1002 19:47:27.499580    2091 configmap.go:197] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Oct  2 19:47:27.499952 kubelet[2091]: E1002 19:47:27.499691    2091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-config-path podName:96b8c903-a200-4c86-8aef-9fbe94ca5cc9 nodeName:}" failed. No retries permitted until 2023-10-02 19:47:27.999664209 +0000 UTC m=+208.493754055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-config-path") pod "cilium-bgqvw" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9") : configmap "cilium-config" not found
Oct  2 19:47:27.767525 kubelet[2091]: E1002 19:47:27.767471    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:27.914894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37-rootfs.mount: Deactivated successfully.
Oct  2 19:47:28.097872 kubelet[2091]: E1002 19:47:28.097563    2091 configmap.go:197] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Oct  2 19:47:28.097872 kubelet[2091]: E1002 19:47:28.097646    2091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-config-path podName:96b8c903-a200-4c86-8aef-9fbe94ca5cc9 nodeName:}" failed. No retries permitted until 2023-10-02 19:47:29.097627981 +0000 UTC m=+209.591717833 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-config-path") pod "cilium-bgqvw" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9") : configmap "cilium-config" not found
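The two retry records above show the kubelet doubling the wait between MountVolume.SetUp attempts for cilium-config-path (500ms, then 1s) while the cilium-config ConfigMap is missing. A minimal sketch of that doubling-backoff pattern, with illustrative constants only (not values taken from the kubelet source):

    import time

    def retry_with_backoff(operation, initial_delay=0.5, max_delay=150.0, max_attempts=10):
        """Retry `operation`, doubling the wait after each failure (illustrative only)."""
        delay = initial_delay
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except Exception as exc:
                if attempt == max_attempts:
                    raise
                print(f"attempt {attempt} failed: {exc}; retrying in {delay}s")
                time.sleep(delay)
                delay = min(delay * 2, max_delay)

    # e.g. retry_with_backoff(lambda: read_configmap("cilium-config"))  # read_configmap is hypothetical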
Oct  2 19:47:28.366116 kubelet[2091]: I1002 19:47:28.366003    2091 scope.go:115] "RemoveContainer" containerID="3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230"
Oct  2 19:47:28.366820 env[1632]: time="2023-10-02T19:47:28.366777903Z" level=info msg="StopPodSandbox for \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\""
Oct  2 19:47:28.369996 env[1632]: time="2023-10-02T19:47:28.366845281Z" level=info msg="Container to stop \"3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct  2 19:47:28.369996 env[1632]: time="2023-10-02T19:47:28.366863914Z" level=info msg="Container to stop \"93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct  2 19:47:28.368965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea-shm.mount: Deactivated successfully.
Oct  2 19:47:28.373474 env[1632]: time="2023-10-02T19:47:28.373435420Z" level=info msg="RemoveContainer for \"3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230\""
Oct  2 19:47:28.377437 env[1632]: time="2023-10-02T19:47:28.377395856Z" level=info msg="RemoveContainer for \"3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230\" returns successfully"
Oct  2 19:47:28.378000 audit: BPF prog-id=87 op=UNLOAD
Oct  2 19:47:28.378404 systemd[1]: cri-containerd-b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea.scope: Deactivated successfully.
Oct  2 19:47:28.383000 audit: BPF prog-id=90 op=UNLOAD
Oct  2 19:47:28.406229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea-rootfs.mount: Deactivated successfully.
Oct  2 19:47:28.429166 env[1632]: time="2023-10-02T19:47:28.428933531Z" level=info msg="shim disconnected" id=b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea
Oct  2 19:47:28.429692 env[1632]: time="2023-10-02T19:47:28.429173768Z" level=warning msg="cleaning up after shim disconnected" id=b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea namespace=k8s.io
Oct  2 19:47:28.429692 env[1632]: time="2023-10-02T19:47:28.429188391Z" level=info msg="cleaning up dead shim"
Oct  2 19:47:28.451937 env[1632]: time="2023-10-02T19:47:28.451881209Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3010 runtime=io.containerd.runc.v2\n"
Oct  2 19:47:28.452449 env[1632]: time="2023-10-02T19:47:28.452414807Z" level=info msg="TearDown network for sandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" successfully"
Oct  2 19:47:28.452449 env[1632]: time="2023-10-02T19:47:28.452444481Z" level=info msg="StopPodSandbox for \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" returns successfully"
Oct  2 19:47:28.606266 kubelet[2091]: I1002 19:47:28.600603    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-bpf-maps\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.606266 kubelet[2091]: I1002 19:47:28.600660    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-host-proc-sys-kernel\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.606266 kubelet[2091]: I1002 19:47:28.600675    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.606266 kubelet[2091]: I1002 19:47:28.600707    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-config-path\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.606266 kubelet[2091]: I1002 19:47:28.600730    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.606076 systemd[1]: var-lib-kubelet-pods-96b8c903\x2da200\x2d4c86\x2d8aef\x2d9fbe94ca5cc9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct  2 19:47:28.606919 kubelet[2091]: I1002 19:47:28.600741    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-host-proc-sys-net\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.606919 kubelet[2091]: I1002 19:47:28.600769    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-run\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.606919 kubelet[2091]: I1002 19:47:28.600794    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-hostproc\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.606919 kubelet[2091]: I1002 19:47:28.600823    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-hubble-tls\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.606919 kubelet[2091]: I1002 19:47:28.600845    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cni-path\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.606919 kubelet[2091]: I1002 19:47:28.600877    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-clustermesh-secrets\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.607274 kubelet[2091]: I1002 19:47:28.600903    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-cgroup\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.607274 kubelet[2091]: I1002 19:47:28.600929    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-etc-cni-netd\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.607274 kubelet[2091]: I1002 19:47:28.600960    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8fwj\" (UniqueName: \"kubernetes.io/projected/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-kube-api-access-j8fwj\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.607274 kubelet[2091]: I1002 19:47:28.600984    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-xtables-lock\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.607274 kubelet[2091]: W1002 19:47:28.600972    2091 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/96b8c903-a200-4c86-8aef-9fbe94ca5cc9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct  2 19:47:28.607274 kubelet[2091]: I1002 19:47:28.601010    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-lib-modules\") pod \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\" (UID: \"96b8c903-a200-4c86-8aef-9fbe94ca5cc9\") "
Oct  2 19:47:28.607274 kubelet[2091]: I1002 19:47:28.601045    2091 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-bpf-maps\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.607597 kubelet[2091]: I1002 19:47:28.601062    2091 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-host-proc-sys-kernel\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.607597 kubelet[2091]: I1002 19:47:28.601084    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.607597 kubelet[2091]: I1002 19:47:28.601108    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.607597 kubelet[2091]: I1002 19:47:28.601129    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.607597 kubelet[2091]: I1002 19:47:28.601150    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-hostproc" (OuterVolumeSpecName: "hostproc") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.607866 kubelet[2091]: I1002 19:47:28.601778    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cni-path" (OuterVolumeSpecName: "cni-path") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.607866 kubelet[2091]: I1002 19:47:28.602095    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.607866 kubelet[2091]: I1002 19:47:28.602131    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.607866 kubelet[2091]: I1002 19:47:28.602370    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:47:28.607866 kubelet[2091]: I1002 19:47:28.607098    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct  2 19:47:28.609409 kubelet[2091]: I1002 19:47:28.609374    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct  2 19:47:28.612975 systemd[1]: var-lib-kubelet-pods-96b8c903\x2da200\x2d4c86\x2d8aef\x2d9fbe94ca5cc9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct  2 19:47:28.614268 kubelet[2091]: I1002 19:47:28.614237    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct  2 19:47:28.615662 kubelet[2091]: I1002 19:47:28.615632    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-kube-api-access-j8fwj" (OuterVolumeSpecName: "kube-api-access-j8fwj") pod "96b8c903-a200-4c86-8aef-9fbe94ca5cc9" (UID: "96b8c903-a200-4c86-8aef-9fbe94ca5cc9"). InnerVolumeSpecName "kube-api-access-j8fwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct  2 19:47:28.702005 kubelet[2091]: I1002 19:47:28.701971    2091 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-run\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702005 kubelet[2091]: I1002 19:47:28.702010    2091 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-hostproc\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702235 kubelet[2091]: I1002 19:47:28.702028    2091 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-config-path\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702235 kubelet[2091]: I1002 19:47:28.702043    2091 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-host-proc-sys-net\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702235 kubelet[2091]: I1002 19:47:28.702056    2091 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-hubble-tls\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702235 kubelet[2091]: I1002 19:47:28.702068    2091 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cni-path\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702235 kubelet[2091]: I1002 19:47:28.702080    2091 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-clustermesh-secrets\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702235 kubelet[2091]: I1002 19:47:28.702092    2091 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-cilium-cgroup\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702235 kubelet[2091]: I1002 19:47:28.702105    2091 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-etc-cni-netd\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702235 kubelet[2091]: I1002 19:47:28.702117    2091 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-lib-modules\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702450 kubelet[2091]: I1002 19:47:28.702131    2091 reconciler.go:399] "Volume detached for volume \"kube-api-access-j8fwj\" (UniqueName: \"kubernetes.io/projected/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-kube-api-access-j8fwj\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.702450 kubelet[2091]: I1002 19:47:28.702144    2091 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96b8c903-a200-4c86-8aef-9fbe94ca5cc9-xtables-lock\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:47:28.768394 kubelet[2091]: E1002 19:47:28.768351    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:28.914827 systemd[1]: var-lib-kubelet-pods-96b8c903\x2da200\x2d4c86\x2d8aef\x2d9fbe94ca5cc9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj8fwj.mount: Deactivated successfully.
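The mount units being deactivated above are systemd-escaped forms of the kubelet volume paths: "/" becomes "-", and characters such as "-" and "~" become \x2d and \x7e. A minimal sketch that reverses the escapes seen in these unit names (it covers only the escape rules appearing here, not every rule of systemd-escape):

    import re

    def unescape_systemd_unit(unit: str) -> str:
        """Recover the path behind a systemd-escaped mount unit name (escapes seen above only)."""
        name = unit.removesuffix(".mount")
        name = name.replace("-", "/")          # "-" separators were "/" in the original path
        name = re.sub(r"\\x([0-9a-fA-F]{2})",  # \xNN escapes back to their characters
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unescape_systemd_unit(
        r"var-lib-kubelet-pods-96b8c903\x2da200\x2d4c86\x2d8aef\x2d9fbe94ca5cc9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj8fwj.mount"
    ))
    # -> /var/lib/kubelet/pods/96b8c903-a200-4c86-8aef-9fbe94ca5cc9/volumes/kubernetes.io~projected/kube-api-access-j8fwj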
Oct  2 19:47:28.942865 systemd[1]: Removed slice kubepods-burstable-pod96b8c903_a200_4c86_8aef_9fbe94ca5cc9.slice.
Oct  2 19:47:29.369706 kubelet[2091]: I1002 19:47:29.369675    2091 scope.go:115] "RemoveContainer" containerID="93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37"
Oct  2 19:47:29.372036 env[1632]: time="2023-10-02T19:47:29.371964472Z" level=info msg="RemoveContainer for \"93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37\""
Oct  2 19:47:29.378812 env[1632]: time="2023-10-02T19:47:29.378756660Z" level=info msg="RemoveContainer for \"93bc543ee2329bd75318a4dc3fbe1b83cf6a55920206278ae9176ad7da596f37\" returns successfully"
Oct  2 19:47:29.768877 kubelet[2091]: E1002 19:47:29.768828    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:30.065420 kubelet[2091]: W1002 19:47:30.065285    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b8c903_a200_4c86_8aef_9fbe94ca5cc9.slice/cri-containerd-3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230.scope WatchSource:0}: container "3208999099c2adc21ae88129675e9105b8f810c0b0548b66c9421a9f53cf6230" in namespace "k8s.io": not found
Oct  2 19:47:30.435373 kubelet[2091]: I1002 19:47:30.435317    2091 topology_manager.go:205] "Topology Admit Handler"
Oct  2 19:47:30.435373 kubelet[2091]: E1002 19:47:30.435390    2091 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="96b8c903-a200-4c86-8aef-9fbe94ca5cc9" containerName="mount-cgroup"
Oct  2 19:47:30.435668 kubelet[2091]: E1002 19:47:30.435403    2091 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="96b8c903-a200-4c86-8aef-9fbe94ca5cc9" containerName="mount-cgroup"
Oct  2 19:47:30.435668 kubelet[2091]: I1002 19:47:30.435421    2091 memory_manager.go:345] "RemoveStaleState removing state" podUID="96b8c903-a200-4c86-8aef-9fbe94ca5cc9" containerName="mount-cgroup"
Oct  2 19:47:30.441872 systemd[1]: Created slice kubepods-besteffort-pod98d42e9c_39d3_4025_b857_144f217685bd.slice.
Oct  2 19:47:30.463757 kubelet[2091]: I1002 19:47:30.463716    2091 topology_manager.go:205] "Topology Admit Handler"
Oct  2 19:47:30.463938 kubelet[2091]: I1002 19:47:30.463817    2091 memory_manager.go:345] "RemoveStaleState removing state" podUID="96b8c903-a200-4c86-8aef-9fbe94ca5cc9" containerName="mount-cgroup"
Oct  2 19:47:30.470763 systemd[1]: Created slice kubepods-burstable-pod0035927f_7e03_42ca_864b_8e75e1ee8bae.slice.
Oct  2 19:47:30.471887 kubelet[2091]: W1002 19:47:30.471864    2091 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.22.191" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.191' and this object
Oct  2 19:47:30.472555 kubelet[2091]: E1002 19:47:30.472540    2091 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.22.191" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.191' and this object
Oct  2 19:47:30.473116 kubelet[2091]: W1002 19:47:30.472086    2091 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.22.191" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.191' and this object
Oct  2 19:47:30.473116 kubelet[2091]: E1002 19:47:30.472911    2091 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.22.191" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.22.191' and this object
Oct  2 19:47:30.511215 kubelet[2091]: I1002 19:47:30.511165    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98d42e9c-39d3-4025-b857-144f217685bd-cilium-config-path\") pod \"cilium-operator-69b677f97c-mqdz5\" (UID: \"98d42e9c-39d3-4025-b857-144f217685bd\") " pod="kube-system/cilium-operator-69b677f97c-mqdz5"
Oct  2 19:47:30.511215 kubelet[2091]: I1002 19:47:30.511222    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9hw\" (UniqueName: \"kubernetes.io/projected/98d42e9c-39d3-4025-b857-144f217685bd-kube-api-access-9w9hw\") pod \"cilium-operator-69b677f97c-mqdz5\" (UID: \"98d42e9c-39d3-4025-b857-144f217685bd\") " pod="kube-system/cilium-operator-69b677f97c-mqdz5"
Oct  2 19:47:30.612388 kubelet[2091]: I1002 19:47:30.612349    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cni-path\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.612693 kubelet[2091]: I1002 19:47:30.612675    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-lib-modules\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.612974 kubelet[2091]: I1002 19:47:30.612958    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-run\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613116 kubelet[2091]: I1002 19:47:30.613095    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-cgroup\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613204 kubelet[2091]: I1002 19:47:30.613134    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-etc-cni-netd\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613204 kubelet[2091]: I1002 19:47:30.613164    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-hubble-tls\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613204 kubelet[2091]: I1002 19:47:30.613197    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445kg\" (UniqueName: \"kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-kube-api-access-445kg\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613351 kubelet[2091]: I1002 19:47:30.613229    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-bpf-maps\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613351 kubelet[2091]: I1002 19:47:30.613261    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-host-proc-sys-net\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613351 kubelet[2091]: I1002 19:47:30.613294    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-ipsec-secrets\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613351 kubelet[2091]: I1002 19:47:30.613327    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-host-proc-sys-kernel\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613552 kubelet[2091]: I1002 19:47:30.613378    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-hostproc\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613552 kubelet[2091]: I1002 19:47:30.613416    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-xtables-lock\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613552 kubelet[2091]: I1002 19:47:30.613448    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0035927f-7e03-42ca-864b-8e75e1ee8bae-clustermesh-secrets\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.613552 kubelet[2091]: I1002 19:47:30.613480    2091 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-config-path\") pod \"cilium-w9w67\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") " pod="kube-system/cilium-w9w67"
Oct  2 19:47:30.746830 env[1632]: time="2023-10-02T19:47:30.745740625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-mqdz5,Uid:98d42e9c-39d3-4025-b857-144f217685bd,Namespace:kube-system,Attempt:0,}"
Oct  2 19:47:30.769533 env[1632]: time="2023-10-02T19:47:30.769202743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct  2 19:47:30.769533 env[1632]: time="2023-10-02T19:47:30.769313007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct  2 19:47:30.769533 env[1632]: time="2023-10-02T19:47:30.769336415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct  2 19:47:30.769758 kubelet[2091]: E1002 19:47:30.769709    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:30.770496 env[1632]: time="2023-10-02T19:47:30.770419811Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d pid=3036 runtime=io.containerd.runc.v2
Oct  2 19:47:30.794558 systemd[1]: Started cri-containerd-4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d.scope.
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.811000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit: BPF prog-id=91 op=LOAD
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=3036 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:30.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326135343033333963333638623735343538313635383066326438
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=3036 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:30.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326135343033333963333638623735343538313635383066326438
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit: BPF prog-id=92 op=LOAD
Oct  2 19:47:30.812000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c000024cc0 items=0 ppid=3036 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:30.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326135343033333963333638623735343538313635383066326438
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.812000 audit: BPF prog-id=93 op=LOAD
Oct  2 19:47:30.812000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c000024d08 items=0 ppid=3036 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:30.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326135343033333963333638623735343538313635383066326438
Oct  2 19:47:30.812000 audit: BPF prog-id=93 op=UNLOAD
Oct  2 19:47:30.812000 audit: BPF prog-id=92 op=UNLOAD
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { perfmon } for  pid=3045 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit[3045]: AVC avc:  denied  { bpf } for  pid=3045 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:30.813000 audit: BPF prog-id=94 op=LOAD
Oct  2 19:47:30.813000 audit[3045]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c000025118 items=0 ppid=3036 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:30.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465326135343033333963333638623735343538313635383066326438
Oct  2 19:47:30.845941 kubelet[2091]: E1002 19:47:30.845865    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:30.877674 env[1632]: time="2023-10-02T19:47:30.877589057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-mqdz5,Uid:98d42e9c-39d3-4025-b857-144f217685bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\""
Oct  2 19:47:30.880833 env[1632]: time="2023-10-02T19:47:30.880748947Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\""
Oct  2 19:47:30.940670 kubelet[2091]: I1002 19:47:30.940638    2091 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=96b8c903-a200-4c86-8aef-9fbe94ca5cc9 path="/var/lib/kubelet/pods/96b8c903-a200-4c86-8aef-9fbe94ca5cc9/volumes"
Oct  2 19:47:31.715353 kubelet[2091]: E1002 19:47:31.715282    2091 projected.go:265] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Oct  2 19:47:31.715353 kubelet[2091]: E1002 19:47:31.715318    2091 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-w9w67: failed to sync secret cache: timed out waiting for the condition
Oct  2 19:47:31.715898 kubelet[2091]: E1002 19:47:31.715470    2091 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-hubble-tls podName:0035927f-7e03-42ca-864b-8e75e1ee8bae nodeName:}" failed. No retries permitted until 2023-10-02 19:47:32.215445837 +0000 UTC m=+212.709535680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-hubble-tls") pod "cilium-w9w67" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae") : failed to sync secret cache: timed out waiting for the condition
Oct  2 19:47:31.770584 kubelet[2091]: E1002 19:47:31.770513    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:32.197833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349494033.mount: Deactivated successfully.
Oct  2 19:47:32.283342 env[1632]: time="2023-10-02T19:47:32.283288990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9w67,Uid:0035927f-7e03-42ca-864b-8e75e1ee8bae,Namespace:kube-system,Attempt:0,}"
Oct  2 19:47:32.340018 env[1632]: time="2023-10-02T19:47:32.334193929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct  2 19:47:32.340018 env[1632]: time="2023-10-02T19:47:32.334240576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct  2 19:47:32.340018 env[1632]: time="2023-10-02T19:47:32.334257437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct  2 19:47:32.340018 env[1632]: time="2023-10-02T19:47:32.334420205Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8 pid=3082 runtime=io.containerd.runc.v2
Oct  2 19:47:32.377701 systemd[1]: Started cri-containerd-fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8.scope.
Oct  2 19:47:32.412390 kernel: kauditd_printk_skb: 108 callbacks suppressed
Oct  2 19:47:32.412566 kernel: audit: type=1400 audit(1696276052.402:775): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412609 kernel: audit: type=1400 audit(1696276052.402:776): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.424196 kernel: audit: type=1400 audit(1696276052.402:777): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.424291 kernel: audit: type=1400 audit(1696276052.402:778): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.430571 kernel: audit: type=1400 audit(1696276052.402:779): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.438120 kernel: audit: type=1400 audit(1696276052.402:780): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.438225 kernel: audit: type=1400 audit(1696276052.402:781): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.442635 env[1632]: time="2023-10-02T19:47:32.442596110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9w67,Uid:0035927f-7e03-42ca-864b-8e75e1ee8bae,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\""
Oct  2 19:47:32.445947 kernel: audit: type=1400 audit(1696276052.402:782): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.446042 env[1632]: time="2023-10-02T19:47:32.445896399Z" level=info msg="CreateContainer within sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct  2 19:47:32.402000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.455523 kernel: audit: type=1400 audit(1696276052.402:783): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.455617 kernel: audit: type=1400 audit(1696276052.412:784): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit: BPF prog-id=95 op=LOAD
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=3082 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:32.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661396433383762396231336430613761303339336130633965386135
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=c items=0 ppid=3082 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:32.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661396433383762396231336430613761303339336130633965386135
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit: BPF prog-id=96 op=LOAD
Oct  2 19:47:32.412000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c0002b0450 items=0 ppid=3082 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:32.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661396433383762396231336430613761303339336130633965386135
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit: BPF prog-id=97 op=LOAD
Oct  2 19:47:32.412000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 a3=c0002b0498 items=0 ppid=3082 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:32.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661396433383762396231336430613761303339336130633965386135
Oct  2 19:47:32.412000 audit: BPF prog-id=97 op=UNLOAD
Oct  2 19:47:32.412000 audit: BPF prog-id=96 op=UNLOAD
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { perfmon } for  pid=3091 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit[3091]: AVC avc:  denied  { bpf } for  pid=3091 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:32.412000 audit: BPF prog-id=98 op=LOAD
Oct  2 19:47:32.412000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c0002b08a8 items=0 ppid=3082 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:32.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661396433383762396231336430613761303339336130633965386135
Oct  2 19:47:32.499160 env[1632]: time="2023-10-02T19:47:32.499087193Z" level=info msg="CreateContainer within sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\""
Oct  2 19:47:32.499980 env[1632]: time="2023-10-02T19:47:32.499948331Z" level=info msg="StartContainer for \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\""
Oct  2 19:47:32.534870 systemd[1]: Started cri-containerd-5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f.scope.
Oct  2 19:47:32.553960 systemd[1]: cri-containerd-5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f.scope: Deactivated successfully.
Oct  2 19:47:32.606259 env[1632]: time="2023-10-02T19:47:32.606196696Z" level=info msg="shim disconnected" id=5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f
Oct  2 19:47:32.606259 env[1632]: time="2023-10-02T19:47:32.606257738Z" level=warning msg="cleaning up after shim disconnected" id=5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f namespace=k8s.io
Oct  2 19:47:32.606594 env[1632]: time="2023-10-02T19:47:32.606270288Z" level=info msg="cleaning up dead shim"
Oct  2 19:47:32.634102 env[1632]: time="2023-10-02T19:47:32.634046051Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3142 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:47:32.634412 env[1632]: time="2023-10-02T19:47:32.634345175Z" level=error msg="copy shim log" error="read /proc/self/fd/48: file already closed"
Oct  2 19:47:32.635630 env[1632]: time="2023-10-02T19:47:32.635578926Z" level=error msg="Failed to pipe stdout of container \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\"" error="reading from a closed fifo"
Oct  2 19:47:32.635828 env[1632]: time="2023-10-02T19:47:32.635792753Z" level=error msg="Failed to pipe stderr of container \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\"" error="reading from a closed fifo"
Oct  2 19:47:32.637856 env[1632]: time="2023-10-02T19:47:32.637788476Z" level=error msg="StartContainer for \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:47:32.638261 kubelet[2091]: E1002 19:47:32.638235    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f"
Oct  2 19:47:32.638402 kubelet[2091]: E1002 19:47:32.638369    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:47:32.638402 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:47:32.638402 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:47:32.638402 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-445kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:47:32.638760 kubelet[2091]: E1002 19:47:32.638420    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:47:32.771469 kubelet[2091]: E1002 19:47:32.771321    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:33.221496 env[1632]: time="2023-10-02T19:47:33.221348518Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:47:33.224602 env[1632]: time="2023-10-02T19:47:33.224556345Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:47:33.226712 env[1632]: time="2023-10-02T19:47:33.226670756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct  2 19:47:33.227294 env[1632]: time="2023-10-02T19:47:33.227257590Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\""
Oct  2 19:47:33.229774 env[1632]: time="2023-10-02T19:47:33.229739951Z" level=info msg="CreateContainer within sandbox \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Oct  2 19:47:33.250442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount814816016.mount: Deactivated successfully.
Oct  2 19:47:33.261673 env[1632]: time="2023-10-02T19:47:33.261333019Z" level=info msg="CreateContainer within sandbox \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\""
Oct  2 19:47:33.267872 env[1632]: time="2023-10-02T19:47:33.267688651Z" level=info msg="StartContainer for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\""
Oct  2 19:47:33.312350 systemd[1]: Started cri-containerd-4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd.scope.
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.334000 audit: BPF prog-id=99 op=LOAD
Oct  2 19:47:33.335000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.335000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000149c48 a2=10 a3=1c items=0 ppid=3036 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:33.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383433626439353665323632346231306135376236666166323466
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001496b0 a2=3c a3=8 items=0 ppid=3036 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:33.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383433626439353665323632346231306135376236666166323466
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit: BPF prog-id=100 op=LOAD
Oct  2 19:47:33.336000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001499d8 a2=78 a3=c0001ef470 items=0 ppid=3036 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:33.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383433626439353665323632346231306135376236666166323466
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.336000 audit: BPF prog-id=101 op=LOAD
Oct  2 19:47:33.336000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000149770 a2=78 a3=c0001ef4b8 items=0 ppid=3036 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:33.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383433626439353665323632346231306135376236666166323466
Oct  2 19:47:33.337000 audit: BPF prog-id=101 op=UNLOAD
Oct  2 19:47:33.337000 audit: BPF prog-id=100 op=UNLOAD
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { perfmon } for  pid=3163 comm="runc" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit[3163]: AVC avc:  denied  { bpf } for  pid=3163 comm="runc" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct  2 19:47:33.337000 audit: BPF prog-id=102 op=LOAD
Oct  2 19:47:33.337000 audit[3163]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000149c30 a2=78 a3=c0001ef8c8 items=0 ppid=3036 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct  2 19:47:33.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461383433626439353665323632346231306135376236666166323466
Oct  2 19:47:33.363804 env[1632]: time="2023-10-02T19:47:33.363740023Z" level=info msg="StartContainer for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" returns successfully"
Oct  2 19:47:33.391000 audit[3174]: AVC avc:  denied  { map_create } for  pid=3174 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c278,c540 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c278,c540 tclass=bpf permissive=0
Oct  2 19:47:33.391000 audit[3174]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0005cb7d0 a2=48 a3=c0005cb7c0 items=0 ppid=3036 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c278,c540 key=(null)
Oct  2 19:47:33.391000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365
Oct  2 19:47:33.414304 env[1632]: time="2023-10-02T19:47:33.414208292Z" level=info msg="CreateContainer within sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Oct  2 19:47:33.440964 env[1632]: time="2023-10-02T19:47:33.440926130Z" level=info msg="CreateContainer within sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\""
Oct  2 19:47:33.442225 env[1632]: time="2023-10-02T19:47:33.442189587Z" level=info msg="StartContainer for \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\""
Oct  2 19:47:33.488465 systemd[1]: Started cri-containerd-fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd.scope.
Oct  2 19:47:33.524653 systemd[1]: cri-containerd-fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd.scope: Deactivated successfully.
Oct  2 19:47:33.743562 env[1632]: time="2023-10-02T19:47:33.743416131Z" level=info msg="shim disconnected" id=fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd
Oct  2 19:47:33.743562 env[1632]: time="2023-10-02T19:47:33.743477323Z" level=warning msg="cleaning up after shim disconnected" id=fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd namespace=k8s.io
Oct  2 19:47:33.743562 env[1632]: time="2023-10-02T19:47:33.743507507Z" level=info msg="cleaning up dead shim"
Oct  2 19:47:33.753046 env[1632]: time="2023-10-02T19:47:33.752978709Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3219 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:33Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:47:33.753631 env[1632]: time="2023-10-02T19:47:33.753568430Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed"
Oct  2 19:47:33.753891 env[1632]: time="2023-10-02T19:47:33.753841899Z" level=error msg="Failed to pipe stdout of container \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\"" error="reading from a closed fifo"
Oct  2 19:47:33.754531 env[1632]: time="2023-10-02T19:47:33.754456677Z" level=error msg="Failed to pipe stderr of container \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\"" error="reading from a closed fifo"
Oct  2 19:47:33.757338 env[1632]: time="2023-10-02T19:47:33.757233693Z" level=error msg="StartContainer for \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:47:33.757752 kubelet[2091]: E1002 19:47:33.757733    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd"
Oct  2 19:47:33.757941 kubelet[2091]: E1002 19:47:33.757865    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:47:33.757941 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:47:33.757941 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:47:33.757941 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-445kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:47:33.758167 kubelet[2091]: E1002 19:47:33.757927    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:47:33.772143 kubelet[2091]: E1002 19:47:33.772108    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:34.407608 kubelet[2091]: I1002 19:47:34.407260    2091 scope.go:115] "RemoveContainer" containerID="5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f"
Oct  2 19:47:34.408128 kubelet[2091]: I1002 19:47:34.407974    2091 scope.go:115] "RemoveContainer" containerID="5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f"
Oct  2 19:47:34.409328 env[1632]: time="2023-10-02T19:47:34.409290728Z" level=info msg="RemoveContainer for \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\""
Oct  2 19:47:34.410240 env[1632]: time="2023-10-02T19:47:34.409554354Z" level=info msg="RemoveContainer for \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\""
Oct  2 19:47:34.410347 env[1632]: time="2023-10-02T19:47:34.410309031Z" level=error msg="RemoveContainer for \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\" failed" error="failed to set removing state for container \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\": container is already in removing state"
Oct  2 19:47:34.410533 kubelet[2091]: E1002 19:47:34.410511    2091 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\": container is already in removing state" containerID="5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f"
Oct  2 19:47:34.410632 kubelet[2091]: E1002 19:47:34.410551    2091 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f": container is already in removing state; Skipping pod "cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)"
Oct  2 19:47:34.410939 kubelet[2091]: E1002 19:47:34.410916    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:47:34.416172 env[1632]: time="2023-10-02T19:47:34.416129159Z" level=info msg="RemoveContainer for \"5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f\" returns successfully"
Oct  2 19:47:34.772780 kubelet[2091]: E1002 19:47:34.772661    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:35.411168 kubelet[2091]: E1002 19:47:35.411128    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:47:35.719301 kubelet[2091]: W1002 19:47:35.718963    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0035927f_7e03_42ca_864b_8e75e1ee8bae.slice/cri-containerd-5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f.scope WatchSource:0}: container "5de3917a66bc869dc82844bc43268c5ba7403ce3ff7ab4b4ac604f790c96130f" in namespace "k8s.io": not found
Oct  2 19:47:35.773278 kubelet[2091]: E1002 19:47:35.773227    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:35.851507 kubelet[2091]: E1002 19:47:35.847228    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:36.773405 kubelet[2091]: E1002 19:47:36.773341    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:37.773609 kubelet[2091]: E1002 19:47:37.773562    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:38.774695 kubelet[2091]: E1002 19:47:38.774642    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:38.833882 kubelet[2091]: W1002 19:47:38.833829    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0035927f_7e03_42ca_864b_8e75e1ee8bae.slice/cri-containerd-fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd.scope WatchSource:0}: task fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd not found: not found
Oct  2 19:47:39.774977 kubelet[2091]: E1002 19:47:39.774920    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:40.620992 kubelet[2091]: E1002 19:47:40.620940    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:40.775838 kubelet[2091]: E1002 19:47:40.775783    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:40.848661 kubelet[2091]: E1002 19:47:40.848630    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:41.775960 kubelet[2091]: E1002 19:47:41.775902    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:42.776908 kubelet[2091]: E1002 19:47:42.776853    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:43.777713 kubelet[2091]: E1002 19:47:43.777658    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:44.778833 kubelet[2091]: E1002 19:47:44.778777    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:45.779005 kubelet[2091]: E1002 19:47:45.778960    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:45.850107 kubelet[2091]: E1002 19:47:45.850067    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:46.779557 kubelet[2091]: E1002 19:47:46.779511    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:47.780395 kubelet[2091]: E1002 19:47:47.780338    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:47.941595 env[1632]: time="2023-10-02T19:47:47.941555403Z" level=info msg="CreateContainer within sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}"
Oct  2 19:47:47.972429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount502143315.mount: Deactivated successfully.
Oct  2 19:47:47.982876 env[1632]: time="2023-10-02T19:47:47.982824116Z" level=info msg="CreateContainer within sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\""
Oct  2 19:47:47.987408 env[1632]: time="2023-10-02T19:47:47.987070770Z" level=info msg="StartContainer for \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\""
Oct  2 19:47:48.033427 systemd[1]: Started cri-containerd-22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99.scope.
Oct  2 19:47:48.060010 systemd[1]: cri-containerd-22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99.scope: Deactivated successfully.
Oct  2 19:47:48.083680 env[1632]: time="2023-10-02T19:47:48.083627949Z" level=info msg="shim disconnected" id=22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99
Oct  2 19:47:48.083991 env[1632]: time="2023-10-02T19:47:48.083956250Z" level=warning msg="cleaning up after shim disconnected" id=22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99 namespace=k8s.io
Oct  2 19:47:48.083991 env[1632]: time="2023-10-02T19:47:48.083977044Z" level=info msg="cleaning up dead shim"
Oct  2 19:47:48.093401 env[1632]: time="2023-10-02T19:47:48.093339688Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3256 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:48Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:47:48.093713 env[1632]: time="2023-10-02T19:47:48.093647398Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed"
Oct  2 19:47:48.093996 env[1632]: time="2023-10-02T19:47:48.093951096Z" level=error msg="Failed to pipe stderr of container \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\"" error="reading from a closed fifo"
Oct  2 19:47:48.100266 env[1632]: time="2023-10-02T19:47:48.100183169Z" level=error msg="Failed to pipe stdout of container \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\"" error="reading from a closed fifo"
Oct  2 19:47:48.102389 env[1632]: time="2023-10-02T19:47:48.102331251Z" level=error msg="StartContainer for \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:47:48.102658 kubelet[2091]: E1002 19:47:48.102633    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99"
Oct  2 19:47:48.102855 kubelet[2091]: E1002 19:47:48.102760    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:47:48.102855 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:47:48.102855 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:47:48.102855 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-445kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:47:48.103098 kubelet[2091]: E1002 19:47:48.102811    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:47:48.445571 kubelet[2091]: I1002 19:47:48.445543    2091 scope.go:115] "RemoveContainer" containerID="fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd"
Oct  2 19:47:48.446040 kubelet[2091]: I1002 19:47:48.446015    2091 scope.go:115] "RemoveContainer" containerID="fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd"
Oct  2 19:47:48.447903 env[1632]: time="2023-10-02T19:47:48.447852719Z" level=info msg="RemoveContainer for \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\""
Oct  2 19:47:48.448128 env[1632]: time="2023-10-02T19:47:48.448099504Z" level=info msg="RemoveContainer for \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\""
Oct  2 19:47:48.448419 env[1632]: time="2023-10-02T19:47:48.448376359Z" level=error msg="RemoveContainer for \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\" failed" error="failed to set removing state for container \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\": container is already in removing state"
Oct  2 19:47:48.448558 kubelet[2091]: E1002 19:47:48.448539    2091 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\": container is already in removing state" containerID="fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd"
Oct  2 19:47:48.448645 kubelet[2091]: E1002 19:47:48.448578    2091 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd": container is already in removing state; Skipping pod "cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)"
Oct  2 19:47:48.448955 kubelet[2091]: E1002 19:47:48.448924    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:47:48.452090 env[1632]: time="2023-10-02T19:47:48.452054957Z" level=info msg="RemoveContainer for \"fea895367432286eb43d2dd650e1a0927402401699448397f18f075d72963dfd\" returns successfully"
Oct  2 19:47:48.780665 kubelet[2091]: E1002 19:47:48.780547    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:48.952247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99-rootfs.mount: Deactivated successfully.
Oct  2 19:47:49.781112 kubelet[2091]: E1002 19:47:49.781064    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:50.781882 kubelet[2091]: E1002 19:47:50.781828    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:50.851058 kubelet[2091]: E1002 19:47:50.851031    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:51.190405 kubelet[2091]: W1002 19:47:51.190331    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0035927f_7e03_42ca_864b_8e75e1ee8bae.slice/cri-containerd-22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99.scope WatchSource:0}: task 22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99 not found: not found
Oct  2 19:47:51.782867 kubelet[2091]: E1002 19:47:51.782812    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:52.782987 kubelet[2091]: E1002 19:47:52.782940    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:53.783151 kubelet[2091]: E1002 19:47:53.783102    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:54.784044 kubelet[2091]: E1002 19:47:54.783991    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:55.785065 kubelet[2091]: E1002 19:47:55.785010    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:55.852730 kubelet[2091]: E1002 19:47:55.852686    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:47:56.785855 kubelet[2091]: E1002 19:47:56.785807    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:57.786967 kubelet[2091]: E1002 19:47:57.786915    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:58.787094 kubelet[2091]: E1002 19:47:58.787036    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:47:59.787995 kubelet[2091]: E1002 19:47:59.787943    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:00.620295 kubelet[2091]: E1002 19:48:00.620245    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:00.642080 env[1632]: time="2023-10-02T19:48:00.642035250Z" level=info msg="StopPodSandbox for \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\""
Oct  2 19:48:00.642578 env[1632]: time="2023-10-02T19:48:00.642136798Z" level=info msg="TearDown network for sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" successfully"
Oct  2 19:48:00.642578 env[1632]: time="2023-10-02T19:48:00.642180942Z" level=info msg="StopPodSandbox for \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" returns successfully"
Oct  2 19:48:00.649771 env[1632]: time="2023-10-02T19:48:00.649735121Z" level=info msg="RemovePodSandbox for \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\""
Oct  2 19:48:00.649933 env[1632]: time="2023-10-02T19:48:00.649777189Z" level=info msg="Forcibly stopping sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\""
Oct  2 19:48:00.649933 env[1632]: time="2023-10-02T19:48:00.649869684Z" level=info msg="TearDown network for sandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" successfully"
Oct  2 19:48:00.658817 env[1632]: time="2023-10-02T19:48:00.658763848Z" level=info msg="RemovePodSandbox \"1f75ee9b18004320a0c405527b5dea4fbabf21e5c99dbc9e3721c8656b258efe\" returns successfully"
Oct  2 19:48:00.660225 env[1632]: time="2023-10-02T19:48:00.660184110Z" level=info msg="StopPodSandbox for \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\""
Oct  2 19:48:00.660514 env[1632]: time="2023-10-02T19:48:00.660414594Z" level=info msg="TearDown network for sandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" successfully"
Oct  2 19:48:00.660514 env[1632]: time="2023-10-02T19:48:00.660470256Z" level=info msg="StopPodSandbox for \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" returns successfully"
Oct  2 19:48:00.661438 env[1632]: time="2023-10-02T19:48:00.661371699Z" level=info msg="RemovePodSandbox for \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\""
Oct  2 19:48:00.661553 env[1632]: time="2023-10-02T19:48:00.661436272Z" level=info msg="Forcibly stopping sandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\""
Oct  2 19:48:00.661698 env[1632]: time="2023-10-02T19:48:00.661545744Z" level=info msg="TearDown network for sandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" successfully"
Oct  2 19:48:00.668029 env[1632]: time="2023-10-02T19:48:00.667977618Z" level=info msg="RemovePodSandbox \"b82209500233d96bfd677ac2cc72c1f755c49268af21a2f4f898937b02c8feea\" returns successfully"
Oct  2 19:48:00.788772 kubelet[2091]: E1002 19:48:00.788735    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:00.853958 kubelet[2091]: E1002 19:48:00.853931    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:01.789360 kubelet[2091]: E1002 19:48:01.789309    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:02.789584 kubelet[2091]: E1002 19:48:02.789444    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:02.938268 kubelet[2091]: E1002 19:48:02.938227    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:48:03.790392 kubelet[2091]: E1002 19:48:03.790341    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:04.791281 kubelet[2091]: E1002 19:48:04.791225    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:05.792113 kubelet[2091]: E1002 19:48:05.792057    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:05.854991 kubelet[2091]: E1002 19:48:05.854958    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:06.792909 kubelet[2091]: E1002 19:48:06.792857    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:07.793739 kubelet[2091]: E1002 19:48:07.793687    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:08.794291 kubelet[2091]: E1002 19:48:08.794234    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:09.794575 kubelet[2091]: E1002 19:48:09.794524    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:10.795366 kubelet[2091]: E1002 19:48:10.795322    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:10.856005 kubelet[2091]: E1002 19:48:10.855967    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:11.795781 kubelet[2091]: E1002 19:48:11.795740    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:12.796757 kubelet[2091]: E1002 19:48:12.796656    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:13.797376 kubelet[2091]: E1002 19:48:13.797319    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:13.941296 env[1632]: time="2023-10-02T19:48:13.941244046Z" level=info msg="CreateContainer within sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}"
Oct  2 19:48:13.976185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118101418.mount: Deactivated successfully.
Oct  2 19:48:13.982142 env[1632]: time="2023-10-02T19:48:13.982094802Z" level=info msg="CreateContainer within sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab\""
Oct  2 19:48:13.984901 env[1632]: time="2023-10-02T19:48:13.984625923Z" level=info msg="StartContainer for \"f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab\""
Oct  2 19:48:14.022186 systemd[1]: Started cri-containerd-f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab.scope.
Oct  2 19:48:14.048469 systemd[1]: cri-containerd-f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab.scope: Deactivated successfully.
Oct  2 19:48:14.068649 env[1632]: time="2023-10-02T19:48:14.068579569Z" level=info msg="shim disconnected" id=f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab
Oct  2 19:48:14.068649 env[1632]: time="2023-10-02T19:48:14.068635886Z" level=warning msg="cleaning up after shim disconnected" id=f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab namespace=k8s.io
Oct  2 19:48:14.068649 env[1632]: time="2023-10-02T19:48:14.068648689Z" level=info msg="cleaning up dead shim"
Oct  2 19:48:14.079331 env[1632]: time="2023-10-02T19:48:14.079272377Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:48:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3296 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:48:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct  2 19:48:14.079735 env[1632]: time="2023-10-02T19:48:14.079671479Z" level=error msg="copy shim log" error="read /proc/self/fd/49: file already closed"
Oct  2 19:48:14.083603 env[1632]: time="2023-10-02T19:48:14.083533959Z" level=error msg="Failed to pipe stdout of container \"f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab\"" error="reading from a closed fifo"
Oct  2 19:48:14.083814 env[1632]: time="2023-10-02T19:48:14.083767096Z" level=error msg="Failed to pipe stderr of container \"f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab\"" error="reading from a closed fifo"
Oct  2 19:48:14.086131 env[1632]: time="2023-10-02T19:48:14.086080696Z" level=error msg="StartContainer for \"f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct  2 19:48:14.086662 kubelet[2091]: E1002 19:48:14.086627    2091 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab"
Oct  2 19:48:14.087379 kubelet[2091]: E1002 19:48:14.087358    2091 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct  2 19:48:14.087379 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct  2 19:48:14.087379 kubelet[2091]: rm /hostbin/cilium-mount
Oct  2 19:48:14.087379 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-445kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct  2 19:48:14.087990 kubelet[2091]: E1002 19:48:14.087432    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:48:14.502592 kubelet[2091]: I1002 19:48:14.501547    2091 scope.go:115] "RemoveContainer" containerID="22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99"
Oct  2 19:48:14.502592 kubelet[2091]: I1002 19:48:14.502345    2091 scope.go:115] "RemoveContainer" containerID="22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99"
Oct  2 19:48:14.502822 env[1632]: time="2023-10-02T19:48:14.502710852Z" level=info msg="RemoveContainer for \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\""
Oct  2 19:48:14.503704 env[1632]: time="2023-10-02T19:48:14.503654761Z" level=info msg="RemoveContainer for \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\""
Oct  2 19:48:14.503829 env[1632]: time="2023-10-02T19:48:14.503748855Z" level=error msg="RemoveContainer for \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\" failed" error="failed to set removing state for container \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\": container is already in removing state"
Oct  2 19:48:14.503904 kubelet[2091]: E1002 19:48:14.503885    2091 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\": container is already in removing state" containerID="22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99"
Oct  2 19:48:14.503982 kubelet[2091]: E1002 19:48:14.503923    2091 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99": container is already in removing state; Skipping pod "cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)"
Oct  2 19:48:14.504212 kubelet[2091]: E1002 19:48:14.504195    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:48:14.508249 env[1632]: time="2023-10-02T19:48:14.508208906Z" level=info msg="RemoveContainer for \"22e18496c14fa051c9049844293b577d04331c99dfb474bbede2cfbad5461f99\" returns successfully"
Oct  2 19:48:14.797625 kubelet[2091]: E1002 19:48:14.797500    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:14.956225 systemd[1]: run-containerd-runc-k8s.io-f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab-runc.PDgj8H.mount: Deactivated successfully.
Oct  2 19:48:14.956357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab-rootfs.mount: Deactivated successfully.
Oct  2 19:48:15.798608 kubelet[2091]: E1002 19:48:15.798567    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:15.856588 kubelet[2091]: E1002 19:48:15.856559    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:16.799373 kubelet[2091]: E1002 19:48:16.799317    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:17.173307 kubelet[2091]: W1002 19:48:17.173265    2091 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0035927f_7e03_42ca_864b_8e75e1ee8bae.slice/cri-containerd-f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab.scope WatchSource:0}: task f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab not found: not found
Oct  2 19:48:17.800385 kubelet[2091]: E1002 19:48:17.800334    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:18.801575 kubelet[2091]: E1002 19:48:18.801468    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:19.802314 kubelet[2091]: E1002 19:48:19.802261    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:20.621010 kubelet[2091]: E1002 19:48:20.620970    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:20.803328 kubelet[2091]: E1002 19:48:20.803279    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:20.858092 kubelet[2091]: E1002 19:48:20.858064    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:21.803441 kubelet[2091]: E1002 19:48:21.803381    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:22.803568 kubelet[2091]: E1002 19:48:22.803526    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:23.804773 kubelet[2091]: E1002 19:48:23.804703    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:24.805717 kubelet[2091]: E1002 19:48:24.805671    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:25.806438 kubelet[2091]: E1002 19:48:25.806384    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:25.860959 kubelet[2091]: E1002 19:48:25.860706    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:25.938874 kubelet[2091]: E1002 19:48:25.938841    2091 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w9w67_kube-system(0035927f-7e03-42ca-864b-8e75e1ee8bae)\"" pod="kube-system/cilium-w9w67" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae
Oct  2 19:48:26.806962 kubelet[2091]: E1002 19:48:26.806911    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:27.807326 kubelet[2091]: E1002 19:48:27.807266    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:28.808067 kubelet[2091]: E1002 19:48:28.808020    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:29.808962 kubelet[2091]: E1002 19:48:29.808911    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:30.809096 kubelet[2091]: E1002 19:48:30.809052    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:30.862034 kubelet[2091]: E1002 19:48:30.861931    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:31.809646 kubelet[2091]: E1002 19:48:31.809564    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:32.223402 env[1632]: time="2023-10-02T19:48:32.223363040Z" level=info msg="StopPodSandbox for \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\""
Oct  2 19:48:32.226546 env[1632]: time="2023-10-02T19:48:32.223432408Z" level=info msg="Container to stop \"f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct  2 19:48:32.225969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8-shm.mount: Deactivated successfully.
Oct  2 19:48:32.234988 systemd[1]: cri-containerd-fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8.scope: Deactivated successfully.
Oct  2 19:48:32.238910 kernel: kauditd_printk_skb: 107 callbacks suppressed
Oct  2 19:48:32.239041 kernel: audit: type=1334 audit(1696276112.233:812): prog-id=95 op=UNLOAD
Oct  2 19:48:32.233000 audit: BPF prog-id=95 op=UNLOAD
Oct  2 19:48:32.239000 audit: BPF prog-id=98 op=UNLOAD
Oct  2 19:48:32.243537 kernel: audit: type=1334 audit(1696276112.239:813): prog-id=98 op=UNLOAD
Oct  2 19:48:32.263213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8-rootfs.mount: Deactivated successfully.
Oct  2 19:48:32.267009 env[1632]: time="2023-10-02T19:48:32.266955912Z" level=info msg="StopContainer for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" with timeout 30 (s)"
Oct  2 19:48:32.267386 env[1632]: time="2023-10-02T19:48:32.267337215Z" level=info msg="Stop container \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" with signal terminated"
Oct  2 19:48:32.277662 env[1632]: time="2023-10-02T19:48:32.277613133Z" level=info msg="shim disconnected" id=fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8
Oct  2 19:48:32.277662 env[1632]: time="2023-10-02T19:48:32.277660041Z" level=warning msg="cleaning up after shim disconnected" id=fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8 namespace=k8s.io
Oct  2 19:48:32.281140 env[1632]: time="2023-10-02T19:48:32.277675721Z" level=info msg="cleaning up dead shim"
Oct  2 19:48:32.280991 systemd[1]: cri-containerd-4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd.scope: Deactivated successfully.
Oct  2 19:48:32.279000 audit: BPF prog-id=99 op=UNLOAD
Oct  2 19:48:32.284658 kernel: audit: type=1334 audit(1696276112.279:814): prog-id=99 op=UNLOAD
Oct  2 19:48:32.289000 audit: BPF prog-id=102 op=UNLOAD
Oct  2 19:48:32.293568 kernel: audit: type=1334 audit(1696276112.289:815): prog-id=102 op=UNLOAD
Oct  2 19:48:32.301312 env[1632]: time="2023-10-02T19:48:32.301265043Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3336 runtime=io.containerd.runc.v2\n"
Oct  2 19:48:32.302204 env[1632]: time="2023-10-02T19:48:32.302167814Z" level=info msg="TearDown network for sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" successfully"
Oct  2 19:48:32.302628 env[1632]: time="2023-10-02T19:48:32.302533251Z" level=info msg="StopPodSandbox for \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" returns successfully"
Oct  2 19:48:32.319113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd-rootfs.mount: Deactivated successfully.
Oct  2 19:48:32.331710 env[1632]: time="2023-10-02T19:48:32.331657228Z" level=info msg="shim disconnected" id=4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd
Oct  2 19:48:32.331710 env[1632]: time="2023-10-02T19:48:32.331709142Z" level=warning msg="cleaning up after shim disconnected" id=4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd namespace=k8s.io
Oct  2 19:48:32.332023 env[1632]: time="2023-10-02T19:48:32.331721069Z" level=info msg="cleaning up dead shim"
Oct  2 19:48:32.342144 env[1632]: time="2023-10-02T19:48:32.342098996Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3362 runtime=io.containerd.runc.v2\n"
Oct  2 19:48:32.344505 env[1632]: time="2023-10-02T19:48:32.344452549Z" level=info msg="StopContainer for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" returns successfully"
Oct  2 19:48:32.345087 env[1632]: time="2023-10-02T19:48:32.345032171Z" level=info msg="StopPodSandbox for \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\""
Oct  2 19:48:32.345252 env[1632]: time="2023-10-02T19:48:32.345117024Z" level=info msg="Container to stop \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct  2 19:48:32.348300 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d-shm.mount: Deactivated successfully.
Oct  2 19:48:32.355000 audit: BPF prog-id=91 op=UNLOAD
Oct  2 19:48:32.356306 systemd[1]: cri-containerd-4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d.scope: Deactivated successfully.
Oct  2 19:48:32.361433 kernel: audit: type=1334 audit(1696276112.355:816): prog-id=91 op=UNLOAD
Oct  2 19:48:32.361570 kernel: audit: type=1334 audit(1696276112.358:817): prog-id=94 op=UNLOAD
Oct  2 19:48:32.358000 audit: BPF prog-id=94 op=UNLOAD
Oct  2 19:48:32.384919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d-rootfs.mount: Deactivated successfully.
Oct  2 19:48:32.399023 env[1632]: time="2023-10-02T19:48:32.398969222Z" level=info msg="shim disconnected" id=4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d
Oct  2 19:48:32.399023 env[1632]: time="2023-10-02T19:48:32.399021431Z" level=warning msg="cleaning up after shim disconnected" id=4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d namespace=k8s.io
Oct  2 19:48:32.399465 env[1632]: time="2023-10-02T19:48:32.399032994Z" level=info msg="cleaning up dead shim"
Oct  2 19:48:32.409725 env[1632]: time="2023-10-02T19:48:32.409679930Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3394 runtime=io.containerd.runc.v2\n"
Oct  2 19:48:32.410046 env[1632]: time="2023-10-02T19:48:32.410011082Z" level=info msg="TearDown network for sandbox \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" successfully"
Oct  2 19:48:32.410130 env[1632]: time="2023-10-02T19:48:32.410042844Z" level=info msg="StopPodSandbox for \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" returns successfully"
Oct  2 19:48:32.422744 kubelet[2091]: I1002 19:48:32.422711    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-xtables-lock\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.422744 kubelet[2091]: I1002 19:48:32.422753    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cni-path\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423112 kubelet[2091]: I1002 19:48:32.422877    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-hubble-tls\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423112 kubelet[2091]: I1002 19:48:32.422905    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-cgroup\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423112 kubelet[2091]: I1002 19:48:32.422927    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-hostproc\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423112 kubelet[2091]: I1002 19:48:32.422948    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-bpf-maps\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423112 kubelet[2091]: I1002 19:48:32.422980    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-ipsec-secrets\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423112 kubelet[2091]: I1002 19:48:32.423008    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0035927f-7e03-42ca-864b-8e75e1ee8bae-clustermesh-secrets\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423382 kubelet[2091]: I1002 19:48:32.423035    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-run\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423382 kubelet[2091]: I1002 19:48:32.423059    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-etc-cni-netd\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423382 kubelet[2091]: I1002 19:48:32.423091    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-445kg\" (UniqueName: \"kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-kube-api-access-445kg\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423382 kubelet[2091]: I1002 19:48:32.423116    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-host-proc-sys-net\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423382 kubelet[2091]: I1002 19:48:32.423144    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-host-proc-sys-kernel\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423382 kubelet[2091]: I1002 19:48:32.423172    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-lib-modules\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423759 kubelet[2091]: I1002 19:48:32.423204    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-config-path\") pod \"0035927f-7e03-42ca-864b-8e75e1ee8bae\" (UID: \"0035927f-7e03-42ca-864b-8e75e1ee8bae\") "
Oct  2 19:48:32.423759 kubelet[2091]: W1002 19:48:32.423420    2091 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0035927f-7e03-42ca-864b-8e75e1ee8bae/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct  2 19:48:32.425498 kubelet[2091]: I1002 19:48:32.424771    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.425498 kubelet[2091]: I1002 19:48:32.424823    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cni-path" (OuterVolumeSpecName: "cni-path") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.427567 kubelet[2091]: I1002 19:48:32.425975    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.427567 kubelet[2091]: I1002 19:48:32.426014    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-hostproc" (OuterVolumeSpecName: "hostproc") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.427567 kubelet[2091]: I1002 19:48:32.426035    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.427567 kubelet[2091]: I1002 19:48:32.426240    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.427567 kubelet[2091]: I1002 19:48:32.426268    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.427971 kubelet[2091]: I1002 19:48:32.426373    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.427971 kubelet[2091]: I1002 19:48:32.426775    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct  2 19:48:32.427971 kubelet[2091]: I1002 19:48:32.426815    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.427971 kubelet[2091]: I1002 19:48:32.427337    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct  2 19:48:32.434047 kubelet[2091]: I1002 19:48:32.434010    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-kube-api-access-445kg" (OuterVolumeSpecName: "kube-api-access-445kg") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "kube-api-access-445kg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct  2 19:48:32.434443 kubelet[2091]: I1002 19:48:32.434338    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0035927f-7e03-42ca-864b-8e75e1ee8bae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct  2 19:48:32.434696 kubelet[2091]: I1002 19:48:32.434401    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct  2 19:48:32.435795 kubelet[2091]: I1002 19:48:32.435764    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0035927f-7e03-42ca-864b-8e75e1ee8bae" (UID: "0035927f-7e03-42ca-864b-8e75e1ee8bae"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct  2 19:48:32.527072 kubelet[2091]: I1002 19:48:32.523975    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w9hw\" (UniqueName: \"kubernetes.io/projected/98d42e9c-39d3-4025-b857-144f217685bd-kube-api-access-9w9hw\") pod \"98d42e9c-39d3-4025-b857-144f217685bd\" (UID: \"98d42e9c-39d3-4025-b857-144f217685bd\") "
Oct  2 19:48:32.527343 kubelet[2091]: I1002 19:48:32.527322    2091 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98d42e9c-39d3-4025-b857-144f217685bd-cilium-config-path\") pod \"98d42e9c-39d3-4025-b857-144f217685bd\" (UID: \"98d42e9c-39d3-4025-b857-144f217685bd\") "
Oct  2 19:48:32.527471 kubelet[2091]: I1002 19:48:32.527459    2091 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-lib-modules\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.528148 kubelet[2091]: I1002 19:48:32.528059    2091 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-config-path\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.528517 kubelet[2091]: I1002 19:48:32.528504    2091 reconciler.go:399] "Volume detached for volume \"kube-api-access-445kg\" (UniqueName: \"kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-kube-api-access-445kg\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.528624 kubelet[2091]: I1002 19:48:32.528614    2091 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-host-proc-sys-net\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.528713 kubelet[2091]: I1002 19:48:32.528705    2091 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-host-proc-sys-kernel\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.528787 kubelet[2091]: I1002 19:48:32.528779    2091 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cni-path\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.528995 kubelet[2091]: I1002 19:48:32.528984    2091 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0035927f-7e03-42ca-864b-8e75e1ee8bae-hubble-tls\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529087 kubelet[2091]: I1002 19:48:32.529078    2091 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-xtables-lock\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529161 kubelet[2091]: I1002 19:48:32.529154    2091 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-cgroup\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529242 kubelet[2091]: I1002 19:48:32.529234    2091 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-hostproc\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529322 kubelet[2091]: I1002 19:48:32.529314    2091 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-run\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529416 kubelet[2091]: I1002 19:48:32.529402    2091 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-etc-cni-netd\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529510 kubelet[2091]: I1002 19:48:32.529494    2091 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0035927f-7e03-42ca-864b-8e75e1ee8bae-bpf-maps\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529592 kubelet[2091]: I1002 19:48:32.529585    2091 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0035927f-7e03-42ca-864b-8e75e1ee8bae-cilium-ipsec-secrets\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529672 kubelet[2091]: I1002 19:48:32.529664    2091 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0035927f-7e03-42ca-864b-8e75e1ee8bae-clustermesh-secrets\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.529925 kubelet[2091]: W1002 19:48:32.529872    2091 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/98d42e9c-39d3-4025-b857-144f217685bd/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct  2 19:48:32.532184 kubelet[2091]: I1002 19:48:32.532145    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d42e9c-39d3-4025-b857-144f217685bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "98d42e9c-39d3-4025-b857-144f217685bd" (UID: "98d42e9c-39d3-4025-b857-144f217685bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct  2 19:48:32.532270 kubelet[2091]: I1002 19:48:32.532245    2091 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d42e9c-39d3-4025-b857-144f217685bd-kube-api-access-9w9hw" (OuterVolumeSpecName: "kube-api-access-9w9hw") pod "98d42e9c-39d3-4025-b857-144f217685bd" (UID: "98d42e9c-39d3-4025-b857-144f217685bd"). InnerVolumeSpecName "kube-api-access-9w9hw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct  2 19:48:32.537264 kubelet[2091]: I1002 19:48:32.537245    2091 scope.go:115] "RemoveContainer" containerID="4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd"
Oct  2 19:48:32.539066 env[1632]: time="2023-10-02T19:48:32.539022413Z" level=info msg="RemoveContainer for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\""
Oct  2 19:48:32.543199 env[1632]: time="2023-10-02T19:48:32.543082202Z" level=info msg="RemoveContainer for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" returns successfully"
Oct  2 19:48:32.543824 kubelet[2091]: I1002 19:48:32.543804    2091 scope.go:115] "RemoveContainer" containerID="4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd"
Oct  2 19:48:32.544091 env[1632]: time="2023-10-02T19:48:32.544010687Z" level=error msg="ContainerStatus for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\": not found"
Oct  2 19:48:32.547049 systemd[1]: Removed slice kubepods-burstable-pod0035927f_7e03_42ca_864b_8e75e1ee8bae.slice.
Oct  2 19:48:32.548747 kubelet[2091]: E1002 19:48:32.548728    2091 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\": not found" containerID="4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd"
Oct  2 19:48:32.549138 kubelet[2091]: I1002 19:48:32.549113    2091 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd} err="failed to get container status \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\": not found"
Oct  2 19:48:32.549239 kubelet[2091]: I1002 19:48:32.549141    2091 scope.go:115] "RemoveContainer" containerID="f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab"
Oct  2 19:48:32.551271 env[1632]: time="2023-10-02T19:48:32.550402864Z" level=info msg="RemoveContainer for \"f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab\""
Oct  2 19:48:32.553013 systemd[1]: Removed slice kubepods-besteffort-pod98d42e9c_39d3_4025_b857_144f217685bd.slice.
Oct  2 19:48:32.555348 env[1632]: time="2023-10-02T19:48:32.555305253Z" level=info msg="RemoveContainer for \"f0f00a8cb09323c2aa80e4816a39d34c46821a88ac44ed3bb2292f5271c1c1ab\" returns successfully"
Oct  2 19:48:32.630341 kubelet[2091]: I1002 19:48:32.630296    2091 reconciler.go:399] "Volume detached for volume \"kube-api-access-9w9hw\" (UniqueName: \"kubernetes.io/projected/98d42e9c-39d3-4025-b857-144f217685bd-kube-api-access-9w9hw\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.630341 kubelet[2091]: I1002 19:48:32.630348    2091 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98d42e9c-39d3-4025-b857-144f217685bd-cilium-config-path\") on node \"172.31.22.191\" DevicePath \"\""
Oct  2 19:48:32.810537 kubelet[2091]: E1002 19:48:32.810404    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:32.939471 env[1632]: time="2023-10-02T19:48:32.938889423Z" level=info msg="StopPodSandbox for \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\""
Oct  2 19:48:32.939471 env[1632]: time="2023-10-02T19:48:32.939004606Z" level=info msg="TearDown network for sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" successfully"
Oct  2 19:48:32.939471 env[1632]: time="2023-10-02T19:48:32.939050130Z" level=info msg="StopPodSandbox for \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" returns successfully"
Oct  2 19:48:32.939471 env[1632]: time="2023-10-02T19:48:32.939330383Z" level=info msg="StopContainer for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" with timeout 1 (s)"
Oct  2 19:48:32.939471 env[1632]: time="2023-10-02T19:48:32.939366494Z" level=error msg="StopContainer for \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\": not found"
Oct  2 19:48:32.939866 kubelet[2091]: E1002 19:48:32.939746    2091 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd\": not found" containerID="4a843bd956e2624b10a57b6faf24f6ff2f13025c06786878aa86161e91ade5dd"
Oct  2 19:48:32.940429 env[1632]: time="2023-10-02T19:48:32.940231974Z" level=info msg="StopPodSandbox for \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\""
Oct  2 19:48:32.940429 env[1632]: time="2023-10-02T19:48:32.940326657Z" level=info msg="TearDown network for sandbox \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" successfully"
Oct  2 19:48:32.940429 env[1632]: time="2023-10-02T19:48:32.940381041Z" level=info msg="StopPodSandbox for \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" returns successfully"
Oct  2 19:48:32.941815 kubelet[2091]: I1002 19:48:32.940937    2091 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0035927f-7e03-42ca-864b-8e75e1ee8bae path="/var/lib/kubelet/pods/0035927f-7e03-42ca-864b-8e75e1ee8bae/volumes"
Oct  2 19:48:32.941815 kubelet[2091]: I1002 19:48:32.941474    2091 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=98d42e9c-39d3-4025-b857-144f217685bd path="/var/lib/kubelet/pods/98d42e9c-39d3-4025-b857-144f217685bd/volumes"
Oct  2 19:48:33.225285 systemd[1]: var-lib-kubelet-pods-0035927f\x2d7e03\x2d42ca\x2d864b\x2d8e75e1ee8bae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct  2 19:48:33.225412 systemd[1]: var-lib-kubelet-pods-0035927f\x2d7e03\x2d42ca\x2d864b\x2d8e75e1ee8bae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct  2 19:48:33.225516 systemd[1]: var-lib-kubelet-pods-0035927f\x2d7e03\x2d42ca\x2d864b\x2d8e75e1ee8bae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d445kg.mount: Deactivated successfully.
Oct  2 19:48:33.225601 systemd[1]: var-lib-kubelet-pods-0035927f\x2d7e03\x2d42ca\x2d864b\x2d8e75e1ee8bae-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct  2 19:48:33.225679 systemd[1]: var-lib-kubelet-pods-98d42e9c\x2d39d3\x2d4025\x2db857\x2d144f217685bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9w9hw.mount: Deactivated successfully.
Oct  2 19:48:33.811420 kubelet[2091]: E1002 19:48:33.811332    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:34.811539 kubelet[2091]: E1002 19:48:34.811470    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:35.812465 kubelet[2091]: E1002 19:48:35.812411    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:35.863707 kubelet[2091]: E1002 19:48:35.863676    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:36.813265 kubelet[2091]: E1002 19:48:36.813220    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:37.814417 kubelet[2091]: E1002 19:48:37.814333    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:38.815240 kubelet[2091]: E1002 19:48:38.815190    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:39.815799 kubelet[2091]: E1002 19:48:39.815756    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:40.621278 kubelet[2091]: E1002 19:48:40.621228    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:40.816942 kubelet[2091]: E1002 19:48:40.816890    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:40.865280 kubelet[2091]: E1002 19:48:40.865249    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:41.817052 kubelet[2091]: E1002 19:48:41.816998    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:42.817921 kubelet[2091]: E1002 19:48:42.817870    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:43.819040 kubelet[2091]: E1002 19:48:43.818998    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:44.820238 kubelet[2091]: E1002 19:48:44.820179    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:45.820448 kubelet[2091]: E1002 19:48:45.820390    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:45.866174 kubelet[2091]: E1002 19:48:45.866134    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:46.311295 amazon-ssm-agent[1609]: 2023-10-02 19:48:46 INFO Backing off health check to every 600 seconds for 1800 seconds.
Oct  2 19:48:46.411649 amazon-ssm-agent[1609]: 2023-10-02 19:48:46 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-02b440a24027c297f is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-02b440a24027c297f because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct  2 19:48:46.411649 amazon-ssm-agent[1609]:         status code: 400, request id: 2c36b81b-045b-4102-8263-88af993cc386
Oct  2 19:48:46.820769 kubelet[2091]: E1002 19:48:46.820731    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:47.821768 kubelet[2091]: E1002 19:48:47.821715    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:48.633601 kubelet[2091]: E1002 19:48:48.633434    2091 controller.go:187] failed to update lease, error: Put "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct  2 19:48:48.822430 kubelet[2091]: E1002 19:48:48.822373    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:49.823043 kubelet[2091]: E1002 19:48:49.823001    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:50.823908 kubelet[2091]: E1002 19:48:50.823859    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:50.867546 kubelet[2091]: E1002 19:48:50.867507    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:51.824935 kubelet[2091]: E1002 19:48:51.824879    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:52.826122 kubelet[2091]: E1002 19:48:52.825994    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:53.826961 kubelet[2091]: E1002 19:48:53.826905    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:54.827725 kubelet[2091]: E1002 19:48:54.827672    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:55.828092 kubelet[2091]: E1002 19:48:55.828042    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:55.869072 kubelet[2091]: E1002 19:48:55.869033    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:48:56.828879 kubelet[2091]: E1002 19:48:56.828835    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:57.829821 kubelet[2091]: E1002 19:48:57.829768    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:58.634136 kubelet[2091]: E1002 19:48:58.634020    2091 controller.go:187] failed to update lease, error: Put "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct  2 19:48:58.830289 kubelet[2091]: E1002 19:48:58.830234    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:48:59.830388 kubelet[2091]: E1002 19:48:59.830332    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:00.620965 kubelet[2091]: E1002 19:49:00.620911    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:00.671377 env[1632]: time="2023-10-02T19:49:00.671331311Z" level=info msg="StopPodSandbox for \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\""
Oct  2 19:49:00.672338 env[1632]: time="2023-10-02T19:49:00.671433692Z" level=info msg="TearDown network for sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" successfully"
Oct  2 19:49:00.672338 env[1632]: time="2023-10-02T19:49:00.671479632Z" level=info msg="StopPodSandbox for \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" returns successfully"
Oct  2 19:49:00.672799 env[1632]: time="2023-10-02T19:49:00.672687746Z" level=info msg="RemovePodSandbox for \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\""
Oct  2 19:49:00.672922 env[1632]: time="2023-10-02T19:49:00.672806115Z" level=info msg="Forcibly stopping sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\""
Oct  2 19:49:00.672922 env[1632]: time="2023-10-02T19:49:00.672898229Z" level=info msg="TearDown network for sandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" successfully"
Oct  2 19:49:00.676628 env[1632]: time="2023-10-02T19:49:00.676574321Z" level=info msg="RemovePodSandbox \"fa9d387b9b13d0a7a0393a0c9e8a5b9b0858100efcd64cd69fae17c376a4b1c8\" returns successfully"
Oct  2 19:49:00.677226 env[1632]: time="2023-10-02T19:49:00.677189794Z" level=info msg="StopPodSandbox for \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\""
Oct  2 19:49:00.677347 env[1632]: time="2023-10-02T19:49:00.677285096Z" level=info msg="TearDown network for sandbox \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" successfully"
Oct  2 19:49:00.677347 env[1632]: time="2023-10-02T19:49:00.677329329Z" level=info msg="StopPodSandbox for \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" returns successfully"
Oct  2 19:49:00.677814 env[1632]: time="2023-10-02T19:49:00.677782523Z" level=info msg="RemovePodSandbox for \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\""
Oct  2 19:49:00.677905 env[1632]: time="2023-10-02T19:49:00.677818209Z" level=info msg="Forcibly stopping sandbox \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\""
Oct  2 19:49:00.677966 env[1632]: time="2023-10-02T19:49:00.677900833Z" level=info msg="TearDown network for sandbox \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" successfully"
Oct  2 19:49:00.681542 env[1632]: time="2023-10-02T19:49:00.681479920Z" level=info msg="RemovePodSandbox \"4e2a540339c368b7545816580f2d8cb3b7a452c1627618272ef5c24c9fa0e90d\" returns successfully"
Oct  2 19:49:00.830927 kubelet[2091]: E1002 19:49:00.830890    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:00.870625 kubelet[2091]: E1002 19:49:00.870598    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:49:01.832124 kubelet[2091]: E1002 19:49:01.832070    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:02.832919 kubelet[2091]: E1002 19:49:02.832867    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:03.833405 kubelet[2091]: E1002 19:49:03.833353    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:04.833589 kubelet[2091]: E1002 19:49:04.833536    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:05.834613 kubelet[2091]: E1002 19:49:05.834561    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:05.871476 kubelet[2091]: E1002 19:49:05.871443    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:49:06.835293 kubelet[2091]: E1002 19:49:06.835236    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:07.836322 kubelet[2091]: E1002 19:49:07.836266    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:08.080600 kubelet[2091]: E1002 19:49:08.080564    2091 controller.go:187] failed to update lease, error: Put "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": unexpected EOF
Oct  2 19:49:08.082551 kubelet[2091]: E1002 19:49:08.082514    2091 controller.go:187] failed to update lease, error: Put "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": dial tcp 172.31.19.18:6443: connect: connection refused
Oct  2 19:49:08.083742 kubelet[2091]: E1002 19:49:08.083701    2091 controller.go:187] failed to update lease, error: Put "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": dial tcp 172.31.19.18:6443: connect: connection refused
Oct  2 19:49:08.083742 kubelet[2091]: I1002 19:49:08.083739    2091 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Oct  2 19:49:08.084598 kubelet[2091]: E1002 19:49:08.084452    2091 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": dial tcp 172.31.19.18:6443: connect: connection refused
Oct  2 19:49:08.286533 kubelet[2091]: E1002 19:49:08.286457    2091 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": dial tcp 172.31.19.18:6443: connect: connection refused
Oct  2 19:49:08.688022 kubelet[2091]: E1002 19:49:08.687954    2091 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": dial tcp 172.31.19.18:6443: connect: connection refused
Oct  2 19:49:08.836463 kubelet[2091]: E1002 19:49:08.836419    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:09.836784 kubelet[2091]: E1002 19:49:09.836741    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:10.836984 kubelet[2091]: E1002 19:49:10.836934    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:10.872538 kubelet[2091]: E1002 19:49:10.872508    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:49:11.837704 kubelet[2091]: E1002 19:49:11.837652    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:12.837846 kubelet[2091]: E1002 19:49:12.837784    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:13.838769 kubelet[2091]: E1002 19:49:13.838728    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:14.839104 kubelet[2091]: E1002 19:49:14.839053    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:15.840153 kubelet[2091]: E1002 19:49:15.840113    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:15.873973 kubelet[2091]: E1002 19:49:15.873877    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:49:16.840311 kubelet[2091]: E1002 19:49:16.840252    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:17.841014 kubelet[2091]: E1002 19:49:17.840964    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:18.841196 kubelet[2091]: E1002 19:49:18.841143    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:19.276634 kubelet[2091]: E1002 19:49:19.276589    2091 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"172.31.22.191\": Get \"https://172.31.19.18:6443/api/v1/nodes/172.31.22.191?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct  2 19:49:19.489393 kubelet[2091]: E1002 19:49:19.489336    2091 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.19.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.191?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Oct  2 19:49:19.841588 kubelet[2091]: E1002 19:49:19.841530    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:20.620606 kubelet[2091]: E1002 19:49:20.620558    2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:20.842087 kubelet[2091]: E1002 19:49:20.842035    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:20.875105 kubelet[2091]: E1002 19:49:20.874816    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:49:21.843154 kubelet[2091]: E1002 19:49:21.843087    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:22.843819 kubelet[2091]: E1002 19:49:22.843771    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:23.844732 kubelet[2091]: E1002 19:49:23.844676    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:24.845351 kubelet[2091]: E1002 19:49:24.845301    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:25.846614 kubelet[2091]: E1002 19:49:25.845455    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:25.876355 kubelet[2091]: E1002 19:49:25.876326    2091 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct  2 19:49:26.846138 kubelet[2091]: E1002 19:49:26.846082    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct  2 19:49:27.847252 kubelet[2091]: E1002 19:49:27.847195    2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"