Oct 2 19:54:51.156906 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:54:51.156941 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:51.156956 kernel: BIOS-provided physical RAM map: Oct 2 19:54:51.156968 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:54:51.156979 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:54:51.156990 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:54:51.157008 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Oct 2 19:54:51.157020 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Oct 2 19:54:51.157031 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Oct 2 19:54:51.157043 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:54:51.157054 kernel: NX (Execute Disable) protection: active Oct 2 19:54:51.157065 kernel: SMBIOS 2.7 present. 
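The BIOS-e820 map above lists two usable RAM regions. A quick sketch (plain Python, parsing the ranges exactly as logged; end addresses are inclusive) shows they sum to within a few KiB of the 2057760K total the kernel reports later in this boot:

```python
import re

# Usable regions exactly as printed in the e820 map above.
E820_LINES = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable",
]

def usable_bytes(lines):
    """Sum the sizes of 'usable' e820 regions (end address is inclusive)."""
    total = 0
    for line in lines:
        m = re.search(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", line)
        if m:
            start, end = (int(x, 16) for x in m.groups())
            total += end - start + 1
    return total

total = usable_bytes(E820_LINES)
print(total // 1024, "KiB")  # 2057767 KiB, within a few KiB of the logged 2057760K
```

The small remaining difference comes from pages the kernel itself reserves early (e.g. the "update [mem 0x00000000-0x00000fff] usable ==> reserved" adjustment logged below).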
Oct 2 19:54:51.157075 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Oct 2 19:54:51.157085 kernel: Hypervisor detected: KVM Oct 2 19:54:51.157101 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:54:51.157113 kernel: kvm-clock: cpu 0, msr 13f8a001, primary cpu clock Oct 2 19:54:51.157125 kernel: kvm-clock: using sched offset of 6101026659 cycles Oct 2 19:54:51.157138 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:54:51.157151 kernel: tsc: Detected 2500.006 MHz processor Oct 2 19:54:51.157164 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:54:51.157180 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:54:51.157193 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Oct 2 19:54:51.157206 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:54:51.157218 kernel: Using GB pages for direct mapping Oct 2 19:54:51.157231 kernel: ACPI: Early table checksum verification disabled Oct 2 19:54:51.157245 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Oct 2 19:54:51.157257 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Oct 2 19:54:51.157268 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 19:54:51.157296 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Oct 2 19:54:51.157312 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Oct 2 19:54:51.157326 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 2 19:54:51.157339 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 19:54:51.157351 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Oct 2 19:54:51.157364 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 19:54:51.157378 
kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Oct 2 19:54:51.157391 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Oct 2 19:54:51.157404 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 2 19:54:51.157419 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Oct 2 19:54:51.157432 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Oct 2 19:54:51.157444 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Oct 2 19:54:51.157463 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Oct 2 19:54:51.157477 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Oct 2 19:54:51.157491 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Oct 2 19:54:51.157505 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Oct 2 19:54:51.157522 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Oct 2 19:54:51.157536 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Oct 2 19:54:51.157550 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Oct 2 19:54:51.157563 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 2 19:54:51.157578 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 2 19:54:51.157591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Oct 2 19:54:51.157605 kernel: NUMA: Initialized distance table, cnt=1 Oct 2 19:54:51.157633 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Oct 2 19:54:51.157651 kernel: Zone ranges: Oct 2 19:54:51.157665 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:54:51.157680 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Oct 2 19:54:51.157693 kernel: Normal empty Oct 2 19:54:51.157708 kernel: Movable zone start for each node Oct 2 19:54:51.157721 kernel: Early memory node ranges Oct 2 19:54:51.157734 
kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:54:51.157747 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Oct 2 19:54:51.157761 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Oct 2 19:54:51.157778 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:54:51.157792 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:54:51.157806 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Oct 2 19:54:51.157819 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 2 19:54:51.157834 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:54:51.157848 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Oct 2 19:54:51.157862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:54:51.157876 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:54:51.157890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:54:51.157908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:54:51.157922 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:54:51.157935 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:54:51.157949 kernel: TSC deadline timer available Oct 2 19:54:51.157963 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 19:54:51.157978 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Oct 2 19:54:51.157992 kernel: Booting paravirtualized kernel on KVM Oct 2 19:54:51.158005 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:54:51.158019 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 19:54:51.158036 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 19:54:51.158050 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 19:54:51.158064 kernel: pcpu-alloc: [0] 0 1 Oct 2 
19:54:51.158088 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Oct 2 19:54:51.158102 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:54:51.158117 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 19:54:51.158133 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Oct 2 19:54:51.158147 kernel: Policy zone: DMA32 Oct 2 19:54:51.158163 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:51.158182 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:54:51.158196 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:54:51.158210 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 19:54:51.158224 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:54:51.158239 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 121024K reserved, 0K cma-reserved) Oct 2 19:54:51.158253 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:54:51.158266 kernel: Kernel/User page tables isolation: enabled Oct 2 19:54:51.158278 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:54:51.158295 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:54:51.158309 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:54:51.158325 kernel: rcu: RCU event tracing is enabled. Oct 2 19:54:51.158341 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. 
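The "Kernel command line:" entry above shows `rootflags=rw mount.usrflags=ro` twice because the bootloader prepends them before the stored arguments. A minimal sketch of how such a line splits into parameters (the kernel's own parser also handles quoting, which this ignores; later duplicates simply override earlier ones here):

```python
# The logged command line, verbatim (note the duplicated rootflags/mount.usrflags).
CMDLINE = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 "
    "modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 "
    "verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1"
)

def parse_cmdline(cmdline):
    """Split a kernel command line into a dict; bare flags map to True."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")  # split at the first '=' only
        params[key] = value if sep else True
    return params

params = parse_cmdline(CMDLINE)
```

Parameters the kernel does not recognize (like `BOOT_IMAGE=`) are passed through to user space, as the log itself notes.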
Oct 2 19:54:51.158356 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:54:51.158368 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:54:51.158383 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:54:51.158399 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:54:51.158414 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 2 19:54:51.158432 kernel: random: crng init done Oct 2 19:54:51.158447 kernel: Console: colour VGA+ 80x25 Oct 2 19:54:51.158462 kernel: printk: console [ttyS0] enabled Oct 2 19:54:51.158476 kernel: ACPI: Core revision 20210730 Oct 2 19:54:51.158491 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Oct 2 19:54:51.158505 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:54:51.158520 kernel: x2apic enabled Oct 2 19:54:51.158534 kernel: Switched APIC routing to physical x2apic. Oct 2 19:54:51.158549 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Oct 2 19:54:51.158567 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006) Oct 2 19:54:51.158581 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 2 19:54:51.158595 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 2 19:54:51.158610 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:54:51.158652 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:54:51.158671 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:54:51.158685 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:54:51.158700 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Oct 2 19:54:51.158715 kernel: RETBleed: Vulnerable Oct 2 19:54:51.158729 kernel: Speculative Store Bypass: Vulnerable Oct 2 19:54:51.158744 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:54:51.158758 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:54:51.158772 kernel: GDS: Unknown: Dependent on hypervisor status Oct 2 19:54:51.158787 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:54:51.158807 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:54:51.158822 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:54:51.158837 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Oct 2 19:54:51.158851 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Oct 2 19:54:51.158866 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 2 19:54:51.158881 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 2 19:54:51.158898 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 2 19:54:51.158913 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Oct 2 19:54:51.158928 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:54:51.158943 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Oct 2 19:54:51.158957 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Oct 2 19:54:51.158973 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Oct 2 19:54:51.158988 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Oct 2 19:54:51.159004 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Oct 2 19:54:51.159019 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Oct 2 19:54:51.159036 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Oct 2 19:54:51.159052 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:54:51.159071 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:54:51.159086 kernel: LSM: Security Framework initializing Oct 2 19:54:51.159101 kernel: SELinux: Initializing. Oct 2 19:54:51.159117 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:54:51.159132 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:54:51.159147 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Oct 2 19:54:51.159162 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Oct 2 19:54:51.159177 kernel: signal: max sigframe size: 3632 Oct 2 19:54:51.159192 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:54:51.159207 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 2 19:54:51.159225 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:54:51.159240 kernel: x86: Booting SMP configuration: Oct 2 19:54:51.159255 kernel: .... node #0, CPUs: #1 Oct 2 19:54:51.159270 kernel: kvm-clock: cpu 1, msr 13f8a041, secondary cpu clock Oct 2 19:54:51.159285 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Oct 2 19:54:51.159300 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Oct 2 19:54:51.159318 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
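The mitigation status reported above (Spectre v1/v2, RETBleed, MDS, MMIO Stale Data) is also exposed at runtime through sysfs on any modern kernel. A small sketch for inspecting it on a live system; it returns an empty dict where that sysfs directory is unavailable (e.g. in some containers):

```python
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def read_vulnerabilities(path=VULN_DIR):
    """Return {vulnerability: status string} from sysfs, or {} if unavailable."""
    if not os.path.isdir(path):
        return {}
    out = {}
    for name in sorted(os.listdir(path)):
        try:
            with open(os.path.join(path, name)) as f:
                out[name] = f.read().strip()
        except OSError:
            continue  # some entries may be unreadable depending on permissions
    return out

for vuln, status in read_vulnerabilities().items():
    print(f"{vuln}: {status}")
```

On a host matching this log, entries such as `retbleed` and `mds` would read "Vulnerable", mirroring the boot messages.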
Oct 2 19:54:51.159334 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:54:51.159348 kernel: smpboot: Max logical packages: 1 Oct 2 19:54:51.159367 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS) Oct 2 19:54:51.159381 kernel: devtmpfs: initialized Oct 2 19:54:51.159396 kernel: x86/mm: Memory block size: 128MB Oct 2 19:54:51.159411 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:54:51.159427 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:54:51.159442 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:54:51.159456 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:54:51.159471 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:54:51.159486 kernel: audit: type=2000 audit(1696276490.054:1): state=initialized audit_enabled=0 res=1 Oct 2 19:54:51.159503 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:54:51.159519 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:54:51.159534 kernel: cpuidle: using governor menu Oct 2 19:54:51.159551 kernel: ACPI: bus type PCI registered Oct 2 19:54:51.159565 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:54:51.159580 kernel: dca service started, version 1.12.1 Oct 2 19:54:51.159597 kernel: PCI: Using configuration type 1 for base access Oct 2 19:54:51.159614 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:54:51.159643 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:54:51.159658 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:54:51.159670 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:54:51.159682 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:54:51.159694 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:54:51.159708 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:54:51.159721 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:54:51.159735 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:54:51.159747 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:54:51.159758 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 2 19:54:51.159868 kernel: ACPI: Interpreter enabled Oct 2 19:54:51.159882 kernel: ACPI: PM: (supports S0 S5) Oct 2 19:54:51.159894 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:54:51.159907 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:54:51.159920 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Oct 2 19:54:51.159934 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:54:51.160205 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:54:51.161351 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Oct 2 19:54:51.161383 kernel: acpiphp: Slot [3] registered Oct 2 19:54:51.161400 kernel: acpiphp: Slot [4] registered Oct 2 19:54:51.161415 kernel: acpiphp: Slot [5] registered Oct 2 19:54:51.161429 kernel: acpiphp: Slot [6] registered Oct 2 19:54:51.161444 kernel: acpiphp: Slot [7] registered Oct 2 19:54:51.161459 kernel: acpiphp: Slot [8] registered Oct 2 19:54:51.161474 kernel: acpiphp: Slot [9] registered Oct 2 19:54:51.161488 kernel: acpiphp: Slot [10] registered Oct 2 19:54:51.161503 kernel: acpiphp: Slot [11] registered Oct 2 19:54:51.161521 kernel: acpiphp: Slot [12] registered Oct 2 19:54:51.161535 kernel: acpiphp: Slot [13] registered Oct 2 19:54:51.161550 kernel: acpiphp: Slot [14] registered Oct 2 19:54:51.161564 kernel: acpiphp: Slot [15] registered Oct 2 19:54:51.161579 kernel: acpiphp: Slot [16] registered Oct 2 19:54:51.161594 kernel: acpiphp: Slot [17] registered Oct 2 19:54:51.161609 kernel: acpiphp: Slot [18] registered Oct 2 19:54:51.161636 kernel: acpiphp: Slot [19] registered Oct 2 19:54:51.161651 kernel: acpiphp: Slot [20] registered Oct 2 19:54:51.161669 kernel: acpiphp: Slot [21] registered Oct 2 19:54:51.161684 kernel: acpiphp: Slot [22] registered Oct 2 19:54:51.161698 kernel: acpiphp: Slot [23] registered Oct 2 19:54:51.161713 kernel: acpiphp: Slot [24] registered Oct 2 19:54:51.161728 kernel: acpiphp: Slot [25] registered Oct 2 19:54:51.161742 kernel: acpiphp: Slot [26] registered Oct 2 19:54:51.161756 kernel: acpiphp: Slot [27] registered Oct 2 19:54:51.161771 kernel: acpiphp: Slot [28] registered Oct 2 19:54:51.161785 kernel: acpiphp: Slot [29] registered Oct 2 19:54:51.161800 kernel: acpiphp: Slot [30] registered Oct 2 19:54:51.161818 kernel: acpiphp: Slot [31] registered Oct 2 19:54:51.161832 kernel: PCI host bridge to bus 0000:00 Oct 2 19:54:51.161977 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:54:51.162103 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:54:51.162226 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:54:51.166977 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 2 19:54:51.168340 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:54:51.169809 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:54:51.170099 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:54:51.170244 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Oct 2 19:54:51.170371 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 2 19:54:51.170494 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Oct 2 19:54:51.170614 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Oct 2 19:54:51.170868 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Oct 2 19:54:51.171003 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Oct 2 19:54:51.171125 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Oct 2 19:54:51.171246 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Oct 2 19:54:51.171366 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Oct 2 19:54:51.171494 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Oct 2 19:54:51.171626 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Oct 2 19:54:51.171751 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Oct 2 19:54:51.171876 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:54:51.172003 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 19:54:51.172123 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Oct 2 19:54:51.172250 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 19:54:51.172371 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Oct 2 19:54:51.172389 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 
19:54:51.172406 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:54:51.172418 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:54:51.172430 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:54:51.172444 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:54:51.172458 kernel: iommu: Default domain type: Translated Oct 2 19:54:51.172472 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:54:51.172594 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Oct 2 19:54:51.172719 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:54:51.172833 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Oct 2 19:54:51.172853 kernel: vgaarb: loaded Oct 2 19:54:51.172867 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:54:51.172881 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:54:51.172895 kernel: PTP clock support registered Oct 2 19:54:51.172909 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:54:51.172922 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:54:51.172936 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:54:51.172950 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Oct 2 19:54:51.172965 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Oct 2 19:54:51.172979 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Oct 2 19:54:51.172993 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:54:51.173006 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:54:51.173020 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:54:51.173033 kernel: pnp: PnP ACPI init Oct 2 19:54:51.173047 kernel: pnp: PnP ACPI: found 5 devices Oct 2 19:54:51.173060 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:54:51.173074 kernel: NET: Registered PF_INET 
protocol family Oct 2 19:54:51.173089 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:54:51.173103 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 2 19:54:51.173118 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:54:51.173131 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:54:51.173145 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 19:54:51.173159 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 2 19:54:51.173172 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:54:51.173185 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:54:51.173199 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:54:51.173215 kernel: NET: Registered PF_XDP protocol family Oct 2 19:54:51.173345 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:54:51.173449 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:54:51.173558 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:54:51.190637 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 2 19:54:51.190872 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:54:51.191013 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:54:51.191039 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:54:51.191055 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 2 19:54:51.191071 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Oct 2 19:54:51.191086 kernel: clocksource: Switched to clocksource tsc Oct 2 19:54:51.191101 kernel: Initialise system trusted keyrings Oct 2 19:54:51.191116 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 
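The hpet0 device logged earlier is a 32-bit counter at 62.500000 MHz, and its clocksource entry reports max_idle_ns = 30580167144 ns. A back-of-envelope check that this idle limit sits safely below the counter's wraparound time:

```python
# Values taken from the hpet0 / clocksource: hpet lines earlier in this log.
HPET_HZ = 62_500_000          # 62.500000 MHz counter
HPET_MASK_BITS = 32           # mask: 0xffffffff -> 32-bit counter
MAX_IDLE_NS = 30_580_167_144  # max_idle_ns from the clocksource line

# The counter wraps after 2^32 ticks; convert that to nanoseconds.
wrap_ns = (2 ** HPET_MASK_BITS) * 1_000_000_000 // HPET_HZ
print(f"hpet wraps after {wrap_ns / 1e9:.3f} s")  # 68.719 s

# The kernel's advertised max idle interval must stay below the wrap,
# with margin, so a sleeping CPU never misses a counter rollover.
assert MAX_IDLE_NS < wrap_ns
```

The same reasoning explains why the 64-bit kvm-clock and tsc clocksources advertise far larger max_idle_ns values than hpet does.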
Oct 2 19:54:51.191130 kernel: Key type asymmetric registered Oct 2 19:54:51.191144 kernel: Asymmetric key parser 'x509' registered Oct 2 19:54:51.191159 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:54:51.191177 kernel: io scheduler mq-deadline registered Oct 2 19:54:51.191192 kernel: io scheduler kyber registered Oct 2 19:54:51.191207 kernel: io scheduler bfq registered Oct 2 19:54:51.191221 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:54:51.191236 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:54:51.191251 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:54:51.191266 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:54:51.191281 kernel: i8042: Warning: Keylock active Oct 2 19:54:51.191295 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:54:51.191313 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:54:51.191455 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 2 19:54:51.191574 kernel: rtc_cmos 00:00: registered as rtc0 Oct 2 19:54:51.191708 kernel: rtc_cmos 00:00: setting system clock to 2023-10-02T19:54:50 UTC (1696276490) Oct 2 19:54:51.191823 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Oct 2 19:54:51.191842 kernel: intel_pstate: CPU model not supported Oct 2 19:54:51.191857 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:54:51.191872 kernel: Segment Routing with IPv6 Oct 2 19:54:51.191890 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:54:51.191904 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:54:51.191919 kernel: Key type dns_resolver registered Oct 2 19:54:51.191933 kernel: IPI shorthand broadcast: enabled Oct 2 19:54:51.191948 kernel: sched_clock: Marking stable (576307228, 325669564)->(1084954573, -182977781) Oct 2 19:54:51.191963 kernel: registered taskstats version 1 Oct 2 19:54:51.191978 kernel: Loading compiled-in X.509 
certificates Oct 2 19:54:51.191992 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:54:51.192007 kernel: Key type .fscrypt registered Oct 2 19:54:51.192024 kernel: Key type fscrypt-provisioning registered Oct 2 19:54:51.192039 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:54:51.192054 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:54:51.192068 kernel: ima: No architecture policies found Oct 2 19:54:51.192083 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:54:51.192098 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:54:51.192113 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:54:51.192128 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:54:51.192142 kernel: Run /init as init process Oct 2 19:54:51.192159 kernel: with arguments: Oct 2 19:54:51.192174 kernel: /init Oct 2 19:54:51.192188 kernel: with environment: Oct 2 19:54:51.192202 kernel: HOME=/ Oct 2 19:54:51.192217 kernel: TERM=linux Oct 2 19:54:51.192231 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:54:51.192249 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:54:51.192267 systemd[1]: Detected virtualization amazon. Oct 2 19:54:51.192286 systemd[1]: Detected architecture x86-64. Oct 2 19:54:51.192302 systemd[1]: Running in initrd. Oct 2 19:54:51.192317 systemd[1]: No hostname configured, using default hostname. Oct 2 19:54:51.192332 systemd[1]: Hostname set to . Oct 2 19:54:51.192366 systemd[1]: Initializing machine ID from VM UUID. 
Oct 2 19:54:51.192384 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:54:51.192400 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:54:51.192415 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:54:51.192431 systemd[1]: Reached target cryptsetup.target. Oct 2 19:54:51.192447 systemd[1]: Reached target paths.target. Oct 2 19:54:51.192462 systemd[1]: Reached target slices.target. Oct 2 19:54:51.192478 systemd[1]: Reached target swap.target. Oct 2 19:54:51.192501 systemd[1]: Reached target timers.target. Oct 2 19:54:51.192522 systemd[1]: Listening on iscsid.socket. Oct 2 19:54:51.192538 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:54:51.192554 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:54:51.192570 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:54:51.192586 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:54:51.192603 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:54:51.192630 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:54:51.192646 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:54:51.192662 systemd[1]: Reached target sockets.target. Oct 2 19:54:51.192681 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:54:51.192697 systemd[1]: Finished network-cleanup.service. Oct 2 19:54:51.192713 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:54:51.192729 systemd[1]: Starting systemd-journald.service... Oct 2 19:54:51.192745 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:54:51.192761 systemd[1]: Starting systemd-resolved.service... Oct 2 19:54:51.192779 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:54:51.192795 systemd[1]: Finished kmod-static-nodes.service. 
Oct 2 19:54:51.192818 systemd-journald[185]: Journal started Oct 2 19:54:51.192897 systemd-journald[185]: Runtime Journal (/run/log/journal/ec231752562d4014a637b19d05be742e) is 4.8M, max 38.7M, 33.9M free. Oct 2 19:54:51.176173 systemd-modules-load[186]: Inserted module 'overlay' Oct 2 19:54:51.417925 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:54:51.417956 kernel: Bridge firewalling registered Oct 2 19:54:51.417968 kernel: SCSI subsystem initialized Oct 2 19:54:51.417978 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:54:51.417990 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:54:51.418003 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:54:51.219716 systemd-modules-load[186]: Inserted module 'br_netfilter' Oct 2 19:54:51.422283 systemd[1]: Started systemd-journald.service. Oct 2 19:54:51.422320 kernel: audit: type=1130 audit(1696276491.416:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.289965 systemd-resolved[187]: Positive Trust Anchors: Oct 2 19:54:51.289980 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:54:51.430781 kernel: audit: type=1130 audit(1696276491.427:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:51.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.290030 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:54:51.298060 systemd-resolved[187]: Defaulting to hostname 'linux'. Oct 2 19:54:51.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.299450 systemd-modules-load[186]: Inserted module 'dm_multipath' Oct 2 19:54:51.454728 kernel: audit: type=1130 audit(1696276491.444:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.429243 systemd[1]: Started systemd-resolved.service. Oct 2 19:54:51.462108 kernel: audit: type=1130 audit(1696276491.453:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:51.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.446389 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:54:51.476126 kernel: audit: type=1130 audit(1696276491.460:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.476169 kernel: audit: type=1130 audit(1696276491.466:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.455151 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:54:51.462172 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:54:51.468188 systemd[1]: Reached target nss-lookup.target. Oct 2 19:54:51.481254 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:54:51.485635 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:54:51.486939 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:54:51.499852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:54:51.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.510463 kernel: audit: type=1130 audit(1696276491.500:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:54:51.507854 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:54:51.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.516636 kernel: audit: type=1130 audit(1696276491.507:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.516900 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:54:51.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.524967 kernel: audit: type=1130 audit(1696276491.517:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.520156 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:54:51.533983 dracut-cmdline[206]: dracut-dracut-053 Oct 2 19:54:51.537483 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:51.656644 kernel: Loading iSCSI transport class v2.0-870. 
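The dracut-cmdline lines above echo the full set of kernel parameters that the initrd will honor. As a rough illustration of the key=value format being parsed (the helper below is ours, not dracut's actual parser, and the command line is a shortened copy of the one in the log):

```python
# Illustrative helper (not part of dracut): split a kernel command line
# of the kind logged above into a key -> value mapping.
def kernel_args(cmdline):
    args = {}
    for tok in cmdline.split():
        key, _, value = tok.partition("=")
        args[key] = value  # flag-style tokens (no '=') get an empty value
    return args

# A shortened version of the command line from the log above.
args = kernel_args(
    "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT console=ttyS0,115200n8 "
    "flatcar.oem.id=ec2 net.ifnames=0 nvme_core.io_timeout=4294967295"
)
print(args["root"])            # LABEL=ROOT
print(args["flatcar.oem.id"])  # ec2
```

Note that a parameter repeated on the real command line (such as rootflags above) would simply keep its last value in this sketch.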
Oct 2 19:54:51.674425 kernel: iscsi: registered transport (tcp) Oct 2 19:54:51.704779 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:54:51.704859 kernel: QLogic iSCSI HBA Driver Oct 2 19:54:51.750183 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:54:51.751549 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:54:51.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:51.811667 kernel: raid6: avx512x4 gen() 15429 MB/s Oct 2 19:54:51.829658 kernel: raid6: avx512x4 xor() 5774 MB/s Oct 2 19:54:51.846665 kernel: raid6: avx512x2 gen() 15101 MB/s Oct 2 19:54:51.864664 kernel: raid6: avx512x2 xor() 19712 MB/s Oct 2 19:54:51.882655 kernel: raid6: avx512x1 gen() 14721 MB/s Oct 2 19:54:51.900666 kernel: raid6: avx512x1 xor() 15010 MB/s Oct 2 19:54:51.918668 kernel: raid6: avx2x4 gen() 10088 MB/s Oct 2 19:54:51.938838 kernel: raid6: avx2x4 xor() 3107 MB/s Oct 2 19:54:51.956671 kernel: raid6: avx2x2 gen() 11641 MB/s Oct 2 19:54:51.974656 kernel: raid6: avx2x2 xor() 9146 MB/s Oct 2 19:54:51.992671 kernel: raid6: avx2x1 gen() 9974 MB/s Oct 2 19:54:52.010757 kernel: raid6: avx2x1 xor() 10703 MB/s Oct 2 19:54:52.028676 kernel: raid6: sse2x4 gen() 7125 MB/s Oct 2 19:54:52.046670 kernel: raid6: sse2x4 xor() 4291 MB/s Oct 2 19:54:52.064659 kernel: raid6: sse2x2 gen() 8668 MB/s Oct 2 19:54:52.081666 kernel: raid6: sse2x2 xor() 5674 MB/s Oct 2 19:54:52.099650 kernel: raid6: sse2x1 gen() 8352 MB/s Oct 2 19:54:52.118043 kernel: raid6: sse2x1 xor() 3692 MB/s Oct 2 19:54:52.118116 kernel: raid6: using algorithm avx512x4 gen() 15429 MB/s Oct 2 19:54:52.118135 kernel: raid6: .... 
xor() 5774 MB/s, rmw enabled Oct 2 19:54:52.119073 kernel: raid6: using avx512x2 recovery algorithm Oct 2 19:54:52.144654 kernel: xor: automatically using best checksumming function avx Oct 2 19:54:52.260645 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:54:52.271077 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:54:52.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.272000 audit: BPF prog-id=7 op=LOAD Oct 2 19:54:52.272000 audit: BPF prog-id=8 op=LOAD Oct 2 19:54:52.275144 systemd[1]: Starting systemd-udevd.service... Oct 2 19:54:52.304308 systemd-udevd[384]: Using default interface naming scheme 'v252'. Oct 2 19:54:52.314385 systemd[1]: Started systemd-udevd.service. Oct 2 19:54:52.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.317592 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:54:52.349273 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation Oct 2 19:54:52.410851 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:54:52.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.414917 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:54:52.479318 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:54:52.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:54:52.566661 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:54:52.610224 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:54:52.610519 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:54:52.613638 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:54:52.614698 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Oct 2 19:54:52.616697 kernel: AES CTR mode by8 optimization enabled Oct 2 19:54:52.620716 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:31:5a:69:16:03 Oct 2 19:54:52.623833 (udev-worker)[435]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:54:52.869737 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:54:52.869933 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 2 19:54:52.869947 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:54:52.870048 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:54:52.870061 kernel: GPT:9289727 != 16777215 Oct 2 19:54:52.870071 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:54:52.870082 kernel: GPT:9289727 != 16777215 Oct 2 19:54:52.870092 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:54:52.870103 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:54:52.870113 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (437) Oct 2 19:54:52.760195 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:54:52.904094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:54:52.911536 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:54:52.924242 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:54:52.926731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:54:52.931251 systemd[1]: Starting disk-uuid.service... 
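The GPT complaints above ("9289727 != 16777215") mean the backup (alternate) GPT header is not at the disk's last sector, which is typical when a fixed-size image is written to a larger EBS volume. A minimal sketch of the consistency check the kernel is reporting (the function name is ours, not the kernel's):

```python
def backup_gpt_at_end(alt_header_lba, total_sectors):
    """GPT expects the backup (alternate) header at the last LBA of the
    disk, i.e. total_sectors - 1; anything else produces the kernel
    warning seen above."""
    return alt_header_lba == total_sectors - 1

# Numbers from the log: the backup header sits at LBA 9289727 on a
# 16777216-sector volume, whose last LBA is 16777215.
print(backup_gpt_at_end(9289727, 16777216))   # False: image smaller than disk
print(backup_gpt_at_end(16777215, 16777216))  # True: header relocated to the end
```

Tools like GNU Parted or sgdisk fix this by rewriting the backup structures at the real end of the disk, which is what the kernel message recommends.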
Oct 2 19:54:52.941094 disk-uuid[584]: Primary Header is updated. Oct 2 19:54:52.941094 disk-uuid[584]: Secondary Entries is updated. Oct 2 19:54:52.941094 disk-uuid[584]: Secondary Header is updated. Oct 2 19:54:52.948647 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:54:52.954096 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:54:52.961717 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:54:53.959648 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:54:53.959724 disk-uuid[585]: The operation has completed successfully. Oct 2 19:54:54.152964 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:54:54.153098 systemd[1]: Finished disk-uuid.service. Oct 2 19:54:54.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.169871 systemd[1]: Starting verity-setup.service... Oct 2 19:54:54.195743 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:54:54.271768 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:54:54.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.274343 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:54:54.275559 systemd[1]: Finished verity-setup.service. Oct 2 19:54:54.378077 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:54:54.379340 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:54:54.384972 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Oct 2 19:54:54.389177 systemd[1]: Starting ignition-setup.service... Oct 2 19:54:54.395496 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:54:54.423220 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:54:54.423304 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:54:54.423325 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:54:54.449054 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:54:54.482681 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:54:54.503294 systemd[1]: Finished ignition-setup.service. Oct 2 19:54:54.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.507126 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:54:54.524915 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:54:54.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.527000 audit: BPF prog-id=9 op=LOAD Oct 2 19:54:54.529764 systemd[1]: Starting systemd-networkd.service... Oct 2 19:54:54.568478 systemd-networkd[1097]: lo: Link UP Oct 2 19:54:54.568959 systemd-networkd[1097]: lo: Gained carrier Oct 2 19:54:54.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.569763 systemd-networkd[1097]: Enumeration completed Oct 2 19:54:54.569881 systemd[1]: Started systemd-networkd.service. Oct 2 19:54:54.570350 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 2 19:54:54.580861 systemd[1]: Reached target network.target. Oct 2 19:54:54.596207 systemd[1]: Starting iscsiuio.service... Oct 2 19:54:54.599814 systemd-networkd[1097]: eth0: Link UP Oct 2 19:54:54.599909 systemd-networkd[1097]: eth0: Gained carrier Oct 2 19:54:54.609739 systemd[1]: Started iscsiuio.service. Oct 2 19:54:54.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.613299 systemd[1]: Starting iscsid.service... Oct 2 19:54:54.623984 iscsid[1102]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:54:54.623984 iscsid[1102]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:54:54.623984 iscsid[1102]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:54:54.623984 iscsid[1102]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:54:54.623984 iscsid[1102]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:54:54.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.649090 iscsid[1102]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:54:54.626079 systemd[1]: Started iscsid.service. 
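The iscsid warning above spells out the fix: write an IQN-formatted InitiatorName to /etc/iscsi/initiatorname.iscsi. A hedged sketch of building such a name (the domain and identifier below are illustrative; on a real host the result is written to that file as root):

```python
def initiator_name(reversed_domain, identifier="", year_month="2023-10"):
    """Build the line iscsid asks for above:
    InitiatorName=iqn.yyyy-mm.<reversed domain>[:identifier].
    All arguments here are illustrative placeholders."""
    name = f"iqn.{year_month}.{reversed_domain}"
    if identifier:
        name += f":{identifier}"
    return f"InitiatorName={name}"

# Reproduces the example given in the log message itself.
print(initiator_name("com.redhat", "fc6", "2001-04"))
# InitiatorName=iqn.2001-04.com.redhat:fc6
```

As the log also notes, the warning is harmless when no software-iSCSI targets are in use, which is why boot proceeds normally here.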
Oct 2 19:54:54.626402 systemd-networkd[1097]: eth0: DHCPv4 address 172.31.18.171/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:54:54.642212 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:54:54.659034 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:54:54.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.659259 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:54:54.663199 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:54:54.665676 systemd[1]: Reached target remote-fs.target. Oct 2 19:54:54.674454 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:54:54.692174 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:54:54.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.183294 ignition[1089]: Ignition 2.14.0 Oct 2 19:54:55.183308 ignition[1089]: Stage: fetch-offline Oct 2 19:54:55.183500 ignition[1089]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.183580 ignition[1089]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:54:55.200024 ignition[1089]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:54:55.200402 ignition[1089]: Ignition finished successfully Oct 2 19:54:55.203601 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:54:55.218222 kernel: kauditd_printk_skb: 18 callbacks suppressed Oct 2 19:54:55.218569 kernel: audit: type=1130 audit(1696276495.203:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:54:55.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.206847 systemd[1]: Starting ignition-fetch.service... Oct 2 19:54:55.233448 ignition[1121]: Ignition 2.14.0 Oct 2 19:54:55.233462 ignition[1121]: Stage: fetch Oct 2 19:54:55.233769 ignition[1121]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.233802 ignition[1121]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:54:55.242725 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:54:55.244468 ignition[1121]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:54:55.264926 ignition[1121]: INFO : PUT result: OK Oct 2 19:54:55.268342 ignition[1121]: DEBUG : parsed url from cmdline: "" Oct 2 19:54:55.268342 ignition[1121]: INFO : no config URL provided Oct 2 19:54:55.268342 ignition[1121]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:54:55.268342 ignition[1121]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:54:55.274415 ignition[1121]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:54:55.274415 ignition[1121]: INFO : PUT result: OK Oct 2 19:54:55.274415 ignition[1121]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:54:55.279453 ignition[1121]: INFO : GET result: OK Oct 2 19:54:55.280413 ignition[1121]: DEBUG : parsing config with SHA512: c7401719058b8fc43023cf78cf42db6592a595c7927ee8d9e0792762fe10054ae0bc902e34f876e8ae3315d91f063860d9fec128cf15fb54785997bf24e5cbc0 Oct 2 19:54:55.300387 unknown[1121]: fetched base config from "system" Oct 2 19:54:55.300634 unknown[1121]: fetched base config from "system" Oct 2 
19:54:55.301544 ignition[1121]: fetch: fetch complete Oct 2 19:54:55.300644 unknown[1121]: fetched user config from "aws" Oct 2 19:54:55.301551 ignition[1121]: fetch: fetch passed Oct 2 19:54:55.301603 ignition[1121]: Ignition finished successfully Oct 2 19:54:55.308748 systemd[1]: Finished ignition-fetch.service. Oct 2 19:54:55.322737 kernel: audit: type=1130 audit(1696276495.308:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.311427 systemd[1]: Starting ignition-kargs.service... Oct 2 19:54:55.333295 ignition[1127]: Ignition 2.14.0 Oct 2 19:54:55.333310 ignition[1127]: Stage: kargs Oct 2 19:54:55.333451 ignition[1127]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.333524 ignition[1127]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:54:55.344393 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:54:55.346034 ignition[1127]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:54:55.352019 ignition[1127]: INFO : PUT result: OK Oct 2 19:54:55.359182 ignition[1127]: kargs: kargs passed Oct 2 19:54:55.359250 ignition[1127]: Ignition finished successfully Oct 2 19:54:55.362024 systemd[1]: Finished ignition-kargs.service. Oct 2 19:54:55.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.366825 systemd[1]: Starting ignition-disks.service... 
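The PUT/GET pairs Ignition logs in the fetch stage above are the IMDSv2 flow: a session token is obtained with a PUT to /latest/api/token, then presented as a header on subsequent requests such as the user-data GET. A sketch of how those two requests are shaped, built with urllib but deliberately not sent (the endpoint paths match the log; the header names are AWS's documented IMDSv2 headers):

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token_request(ttl=21600):
    # Step 1 (logged above as "PUT .../latest/api/token"): ask for a
    # session token valid for `ttl` seconds.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )

def imds_userdata_request(token):
    # Step 2 (logged above as "GET .../2019-10-01/user-data"): present
    # the token on the actual metadata fetch.
    return urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )

req = imds_token_request()
print(req.get_method())  # PUT
```

The link-local address 169.254.169.254 only answers from inside an EC2 instance, which is why the requests are constructed but not issued in this sketch.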
Oct 2 19:54:55.376451 kernel: audit: type=1130 audit(1696276495.364:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.380940 ignition[1133]: Ignition 2.14.0 Oct 2 19:54:55.380953 ignition[1133]: Stage: disks Oct 2 19:54:55.381199 ignition[1133]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.381234 ignition[1133]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:54:55.394040 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:54:55.396164 ignition[1133]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:54:55.398052 ignition[1133]: INFO : PUT result: OK Oct 2 19:54:55.402439 ignition[1133]: disks: disks passed Oct 2 19:54:55.402511 ignition[1133]: Ignition finished successfully Oct 2 19:54:55.405172 systemd[1]: Finished ignition-disks.service. Oct 2 19:54:55.407558 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:54:55.410720 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:54:55.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.412893 systemd[1]: Reached target local-fs.target. Oct 2 19:54:55.420770 kernel: audit: type=1130 audit(1696276495.405:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.417675 systemd[1]: Reached target sysinit.target. Oct 2 19:54:55.424228 systemd[1]: Reached target basic.target. Oct 2 19:54:55.444199 systemd[1]: Starting systemd-fsck-root.service... 
Oct 2 19:54:55.507346 systemd-fsck[1141]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:54:55.512537 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:54:55.515798 systemd[1]: Mounting sysroot.mount... Oct 2 19:54:55.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.535665 kernel: audit: type=1130 audit(1696276495.512:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.543714 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:54:55.544316 systemd[1]: Mounted sysroot.mount. Oct 2 19:54:55.545459 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:54:55.556917 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:54:55.558638 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:54:55.558697 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:54:55.558726 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:54:55.564943 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:54:55.593349 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:54:55.596172 systemd[1]: Starting initrd-setup-root.service... 
Oct 2 19:54:55.606259 initrd-setup-root[1163]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:54:55.617669 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1158) Oct 2 19:54:55.622177 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:54:55.622226 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:54:55.622238 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:54:55.627644 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:54:55.630526 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:54:55.634649 initrd-setup-root[1189]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:54:55.641912 initrd-setup-root[1197]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:54:55.648291 initrd-setup-root[1205]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:54:55.866951 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:54:55.877604 kernel: audit: type=1130 audit(1696276495.865:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.868411 systemd[1]: Starting ignition-mount.service... Oct 2 19:54:55.882012 systemd[1]: Starting sysroot-boot.service... Oct 2 19:54:55.885813 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:54:55.885944 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Oct 2 19:54:55.920678 ignition[1225]: INFO : Ignition 2.14.0 Oct 2 19:54:55.920678 ignition[1225]: INFO : Stage: mount Oct 2 19:54:55.928788 ignition[1225]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.928788 ignition[1225]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:54:55.942838 kernel: audit: type=1130 audit(1696276495.927:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.927462 systemd[1]: Finished sysroot-boot.service. Oct 2 19:54:55.949209 ignition[1225]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:54:55.949209 ignition[1225]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:54:55.953242 ignition[1225]: INFO : PUT result: OK Oct 2 19:54:55.957411 ignition[1225]: INFO : mount: mount passed Oct 2 19:54:55.958726 ignition[1225]: INFO : Ignition finished successfully Oct 2 19:54:55.961119 systemd[1]: Finished ignition-mount.service. Oct 2 19:54:55.985460 kernel: audit: type=1130 audit(1696276495.963:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.969537 systemd[1]: Starting ignition-files.service... 
Oct 2 19:54:56.012430 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 2 19:54:56.031064 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1234)
Oct 2 19:54:56.035504 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Oct 2 19:54:56.035567 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 2 19:54:56.035587 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Oct 2 19:54:56.043637 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 2 19:54:56.046612 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 2 19:54:56.059744 ignition[1253]: INFO : Ignition 2.14.0
Oct 2 19:54:56.059744 ignition[1253]: INFO : Stage: files
Oct 2 19:54:56.062230 ignition[1253]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:54:56.062230 ignition[1253]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct 2 19:54:56.075809 ignition[1253]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 2 19:54:56.077667 ignition[1253]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 2 19:54:56.079929 ignition[1253]: INFO : PUT result: OK
Oct 2 19:54:56.084514 ignition[1253]: DEBUG : files: compiled without relabeling support, skipping
Oct 2 19:54:56.091693 ignition[1253]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 2 19:54:56.091693 ignition[1253]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 2 19:54:56.106306 ignition[1253]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 2 19:54:56.108160 ignition[1253]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 2 19:54:56.110843 unknown[1253]: wrote ssh authorized keys file for user: core
Oct 2 19:54:56.112404 ignition[1253]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 2 19:54:56.115458 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Oct 2 19:54:56.118304 ignition[1253]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Oct 2 19:54:56.294529 ignition[1253]: INFO : GET result: OK
Oct 2 19:54:56.441179 systemd-networkd[1097]: eth0: Gained IPv6LL
Oct 2 19:54:56.599482 ignition[1253]: DEBUG : file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Oct 2 19:54:56.606137 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Oct 2 19:54:56.606137 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz"
Oct 2 19:54:56.606137 ignition[1253]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1
Oct 2 19:54:56.894760 ignition[1253]: INFO : GET result: OK
Oct 2 19:54:57.041195 ignition[1253]: DEBUG : file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df
Oct 2 19:54:57.044854 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz"
Oct 2 19:54:57.044854 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Oct 2 19:54:57.044854 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Oct 2 19:54:57.059818 ignition[1253]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3946007365"
Oct 2 19:54:57.059818 ignition[1253]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3946007365": device or resource busy
Oct 2 19:54:57.059818 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3946007365", trying btrfs: device or resource busy
Oct 2 19:54:57.059818 ignition[1253]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3946007365"
Oct 2 19:54:57.070372 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1258)
Oct 2 19:54:57.070398 ignition[1253]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3946007365"
Oct 2 19:54:57.070398 ignition[1253]: INFO : op(3): [started] unmounting "/mnt/oem3946007365"
Oct 2 19:54:57.070398 ignition[1253]: INFO : op(3): [finished] unmounting "/mnt/oem3946007365"
Oct 2 19:54:57.070398 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Oct 2 19:54:57.070398 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:54:57.070398 ignition[1253]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1
Oct 2 19:54:57.086925 systemd[1]: mnt-oem3946007365.mount: Deactivated successfully.
Oct 2 19:54:57.157982 ignition[1253]: INFO : GET result: OK
Oct 2 19:54:58.535202 ignition[1253]: DEBUG : file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5
Oct 2 19:54:58.538853 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:54:58.538853 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:54:58.538853 ignition[1253]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1
Oct 2 19:54:58.594891 ignition[1253]: INFO : GET result: OK
Oct 2 19:55:00.429071 ignition[1253]: DEBUG : file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54
Oct 2 19:55:00.433182 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:55:00.435813 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Oct 2 19:55:00.438519 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Oct 2 19:55:00.438519 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:55:00.444484 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:55:00.444484 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Oct 2 19:55:00.451501 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Oct 2 19:55:00.471328 ignition[1253]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4090569422"
Oct 2 19:55:00.481461 ignition[1253]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4090569422": device or resource busy
Oct 2 19:55:00.481461 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4090569422", trying btrfs: device or resource busy
Oct 2 19:55:00.481461 ignition[1253]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4090569422"
Oct 2 19:55:00.489351 ignition[1253]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4090569422"
Oct 2 19:55:00.489351 ignition[1253]: INFO : op(6): [started] unmounting "/mnt/oem4090569422"
Oct 2 19:55:00.493122 ignition[1253]: INFO : op(6): [finished] unmounting "/mnt/oem4090569422"
Oct 2 19:55:00.493122 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Oct 2 19:55:00.493122 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Oct 2 19:55:00.493122 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Oct 2 19:55:00.506334 ignition[1253]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2866053237"
Oct 2 19:55:00.506334 ignition[1253]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2866053237": device or resource busy
Oct 2 19:55:00.506334 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2866053237", trying btrfs: device or resource busy
Oct 2 19:55:00.506334 ignition[1253]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2866053237"
Oct 2 19:55:00.520772 ignition[1253]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2866053237"
Oct 2 19:55:00.520772 ignition[1253]: INFO : op(9): [started] unmounting "/mnt/oem2866053237"
Oct 2 19:55:00.520772 ignition[1253]: INFO : op(9): [finished] unmounting "/mnt/oem2866053237"
Oct 2 19:55:00.520772 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Oct 2 19:55:00.520772 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Oct 2 19:55:00.520772 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Oct 2 19:55:00.532166 systemd[1]: mnt-oem2866053237.mount: Deactivated successfully.
Oct 2 19:55:00.574709 ignition[1253]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem610932982"
Oct 2 19:55:00.576961 ignition[1253]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem610932982": device or resource busy
Oct 2 19:55:00.576961 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem610932982", trying btrfs: device or resource busy
Oct 2 19:55:00.576961 ignition[1253]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem610932982"
Oct 2 19:55:00.587734 ignition[1253]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem610932982"
Oct 2 19:55:00.587734 ignition[1253]: INFO : op(c): [started] unmounting "/mnt/oem610932982"
Oct 2 19:55:00.587734 ignition[1253]: INFO : op(c): [finished] unmounting "/mnt/oem610932982"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(e): [started] processing unit "amazon-ssm-agent.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(e): op(f): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(e): op(f): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(e): [finished] processing unit "amazon-ssm-agent.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(10): [started] processing unit "nvidia.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(10): [finished] processing unit "nvidia.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(13): [started] processing unit "prepare-critools.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:55:00.587734 ignition[1253]: INFO : files: op(13): [finished] processing unit "prepare-critools.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(15): [started] setting preset to enabled for "amazon-ssm-agent.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(15): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:55:00.653454 ignition[1253]: INFO : files: files passed
Oct 2 19:55:00.653454 ignition[1253]: INFO : Ignition finished successfully
Oct 2 19:55:00.651803 systemd[1]: Finished ignition-files.service.
Oct 2 19:55:00.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.720501 kernel: audit: type=1130 audit(1696276500.703:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.721959 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 2 19:55:00.726301 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 2 19:55:00.729469 systemd[1]: Starting ignition-quench.service...
Oct 2 19:55:00.741409 initrd-setup-root-after-ignition[1277]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 2 19:55:00.743144 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 2 19:55:00.743266 systemd[1]: Finished ignition-quench.service.
Oct 2 19:55:00.783767 kernel: audit: type=1130 audit(1696276500.746:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.783880 kernel: audit: type=1131 audit(1696276500.746:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.748238 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 2 19:55:00.819314 kernel: audit: type=1130 audit(1696276500.782:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.784156 systemd[1]: Reached target ignition-complete.target.
Oct 2 19:55:00.829302 systemd[1]: Starting initrd-parse-etc.service...
Oct 2 19:55:00.878954 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 2 19:55:00.879091 systemd[1]: Finished initrd-parse-etc.service.
Oct 2 19:55:00.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.881575 systemd[1]: Reached target initrd-fs.target.
Oct 2 19:55:00.895079 kernel: audit: type=1130 audit(1696276500.879:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.895107 kernel: audit: type=1131 audit(1696276500.879:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.895087 systemd[1]: Reached target initrd.target.
Oct 2 19:55:00.897519 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 2 19:55:00.900931 systemd[1]: Starting dracut-pre-pivot.service...
Oct 2 19:55:00.918942 systemd[1]: Finished dracut-pre-pivot.service.
Oct 2 19:55:00.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.922459 systemd[1]: Starting initrd-cleanup.service...
Oct 2 19:55:00.934350 kernel: audit: type=1130 audit(1696276500.919:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.945853 systemd[1]: Stopped target nss-lookup.target.
Oct 2 19:55:00.946141 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 2 19:55:00.950012 systemd[1]: Stopped target timers.target.
Oct 2 19:55:00.954039 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 2 19:55:00.954220 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 2 19:55:00.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.963150 systemd[1]: Stopped target initrd.target.
Oct 2 19:55:00.968863 systemd[1]: Stopped target basic.target.
Oct 2 19:55:00.987510 kernel: audit: type=1131 audit(1696276500.961:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:00.987585 systemd[1]: Stopped target ignition-complete.target.
Oct 2 19:55:00.991491 systemd[1]: Stopped target ignition-diskful.target.
Oct 2 19:55:00.996687 systemd[1]: Stopped target initrd-root-device.target.
Oct 2 19:55:01.013021 systemd[1]: Stopped target remote-fs.target.
Oct 2 19:55:01.022099 systemd[1]: Stopped target remote-fs-pre.target.
Oct 2 19:55:01.026049 systemd[1]: Stopped target sysinit.target.
Oct 2 19:55:01.028553 systemd[1]: Stopped target local-fs.target.
Oct 2 19:55:01.035123 systemd[1]: Stopped target local-fs-pre.target.
Oct 2 19:55:01.036061 systemd[1]: Stopped target swap.target.
Oct 2 19:55:01.052060 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 2 19:55:01.055419 systemd[1]: Stopped dracut-pre-mount.service.
Oct 2 19:55:01.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.064925 systemd[1]: Stopped target cryptsetup.target.
Oct 2 19:55:01.079379 kernel: audit: type=1131 audit(1696276501.063:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.083547 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 2 19:55:01.083771 systemd[1]: Stopped dracut-initqueue.service.
Oct 2 19:55:01.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.088749 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 2 19:55:01.089472 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 2 19:55:01.120402 kernel: audit: type=1131 audit(1696276501.086:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.118137 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 2 19:55:01.118371 systemd[1]: Stopped ignition-files.service.
Oct 2 19:55:01.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.138521 systemd[1]: Stopping ignition-mount.service...
Oct 2 19:55:01.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.195888 ignition[1291]: INFO : Ignition 2.14.0
Oct 2 19:55:01.195888 ignition[1291]: INFO : Stage: umount
Oct 2 19:55:01.145240 systemd[1]: Stopping iscsid.service...
Oct 2 19:55:01.220480 iscsid[1102]: iscsid shutting down.
Oct 2 19:55:01.224658 ignition[1291]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:55:01.224658 ignition[1291]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Oct 2 19:55:01.156227 systemd[1]: Stopping sysroot-boot.service...
Oct 2 19:55:01.160951 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 2 19:55:01.161225 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 2 19:55:01.294830 ignition[1291]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 2 19:55:01.294830 ignition[1291]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 2 19:55:01.173040 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 2 19:55:01.177711 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 2 19:55:01.224336 systemd[1]: iscsid.service: Deactivated successfully.
Oct 2 19:55:01.224789 systemd[1]: Stopped iscsid.service.
Oct 2 19:55:01.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.312497 ignition[1291]: INFO : PUT result: OK
Oct 2 19:55:01.312927 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 2 19:55:01.316339 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 2 19:55:01.316560 systemd[1]: Finished initrd-cleanup.service.
Oct 2 19:55:01.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.329752 ignition[1291]: INFO : umount: umount passed
Oct 2 19:55:01.332138 ignition[1291]: INFO : Ignition finished successfully
Oct 2 19:55:01.334563 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 2 19:55:01.336880 systemd[1]: Stopped ignition-mount.service.
Oct 2 19:55:01.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.341678 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 2 19:55:01.342051 systemd[1]: Stopped ignition-disks.service.
Oct 2 19:55:01.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.346194 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 2 19:55:01.346342 systemd[1]: Stopped ignition-kargs.service.
Oct 2 19:55:01.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.350848 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 2 19:55:01.351058 systemd[1]: Stopped ignition-fetch.service.
Oct 2 19:55:01.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.358112 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 2 19:55:01.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.358233 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 2 19:55:01.389694 systemd[1]: Stopped target paths.target.
Oct 2 19:55:01.398736 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 2 19:55:01.406941 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 2 19:55:01.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.408856 systemd[1]: Stopped target slices.target.
Oct 2 19:55:01.409837 systemd[1]: Stopped target sockets.target.
Oct 2 19:55:01.412091 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 2 19:55:01.412138 systemd[1]: Closed iscsid.socket.
Oct 2 19:55:01.415012 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 2 19:55:01.415075 systemd[1]: Stopped ignition-setup.service.
Oct 2 19:55:01.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.417714 systemd[1]: Stopping iscsiuio.service...
Oct 2 19:55:01.426133 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 2 19:55:01.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.428603 systemd[1]: Stopped iscsiuio.service.
Oct 2 19:55:01.439037 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 2 19:55:01.439130 systemd[1]: Stopped sysroot-boot.service.
Oct 2 19:55:01.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.444255 systemd[1]: Stopped target network.target.
Oct 2 19:55:01.449558 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 2 19:55:01.449607 systemd[1]: Closed iscsiuio.socket.
Oct 2 19:55:01.451384 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 2 19:55:01.451444 systemd[1]: Stopped initrd-setup-root.service.
Oct 2 19:55:01.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.452530 systemd[1]: Stopping systemd-networkd.service...
Oct 2 19:55:01.453225 systemd[1]: Stopping systemd-resolved.service...
Oct 2 19:55:01.459829 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 2 19:55:01.459954 systemd[1]: Stopped systemd-resolved.service.
Oct 2 19:55:01.461054 systemd-networkd[1097]: eth0: DHCPv6 lease lost
Oct 2 19:55:01.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.481192 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 2 19:55:01.481329 systemd[1]: Stopped systemd-networkd.service.
Oct 2 19:55:01.489124 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 2 19:55:01.491516 systemd[1]: Closed systemd-networkd.socket.
Oct 2 19:55:01.499000 audit: BPF prog-id=6 op=UNLOAD
Oct 2 19:55:01.500000 audit: BPF prog-id=9 op=UNLOAD
Oct 2 19:55:01.499848 systemd[1]: Stopping network-cleanup.service...
Oct 2 19:55:01.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.501761 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 2 19:55:01.501857 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 2 19:55:01.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.503484 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 2 19:55:01.503558 systemd[1]: Stopped systemd-sysctl.service.
Oct 2 19:55:01.506596 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 2 19:55:01.506681 systemd[1]: Stopped systemd-modules-load.service.
Oct 2 19:55:01.510334 systemd[1]: Stopping systemd-udevd.service...
Oct 2 19:55:01.530233 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 2 19:55:01.536809 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 2 19:55:01.537202 systemd[1]: Stopped systemd-udevd.service.
Oct 2 19:55:01.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.561535 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 2 19:55:01.561671 systemd[1]: Closed systemd-udevd-control.socket.
Oct 2 19:55:01.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.564009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 2 19:55:01.564069 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 2 19:55:01.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.573835 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 2 19:55:01.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:01.581610 systemd[1]: Stopped dracut-pre-udev.service.
Oct 2 19:55:01.596773 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 2 19:55:01.596855 systemd[1]: Stopped dracut-cmdline.service.
Oct 2 19:55:01.604159 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 2 19:55:01.604237 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 2 19:55:01.623761 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct 2 19:55:01.630029 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:55:01.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:01.630114 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:55:01.634018 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:55:01.634106 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:55:01.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:01.671538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:55:01.671611 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:55:01.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:01.687126 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:55:01.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:01.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:01.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:01.687908 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:55:01.688042 systemd[1]: Stopped network-cleanup.service. Oct 2 19:55:01.692084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:55:01.692360 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:55:01.703049 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:55:01.718258 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:55:01.748258 systemd[1]: Switching root. Oct 2 19:55:01.779907 systemd-journald[185]: Journal stopped Oct 2 19:55:08.166188 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Oct 2 19:55:08.166269 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:55:08.166301 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:55:08.166323 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:55:08.166341 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:55:08.166364 kernel: SELinux: policy capability open_perms=1 Oct 2 19:55:08.166382 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:55:08.166401 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:55:08.166423 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:55:08.166440 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:55:08.166458 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:55:08.166476 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:55:08.166494 systemd[1]: Successfully loaded SELinux policy in 104.295ms. Oct 2 19:55:08.166527 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.976ms. 
Oct 2 19:55:08.166547 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:55:08.166654 systemd[1]: Detected virtualization amazon. Oct 2 19:55:08.166680 systemd[1]: Detected architecture x86-64. Oct 2 19:55:08.166699 systemd[1]: Detected first boot. Oct 2 19:55:08.166719 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:55:08.166739 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:55:08.166762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:55:08.166786 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:55:08.166812 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:55:08.166835 kernel: kauditd_printk_skb: 40 callbacks suppressed Oct 2 19:55:08.166854 kernel: audit: type=1334 audit(1696276507.735:87): prog-id=12 op=LOAD Oct 2 19:55:08.166871 kernel: audit: type=1334 audit(1696276507.735:88): prog-id=3 op=UNLOAD Oct 2 19:55:08.166890 kernel: audit: type=1334 audit(1696276507.736:89): prog-id=13 op=LOAD Oct 2 19:55:08.166907 kernel: audit: type=1334 audit(1696276507.742:90): prog-id=14 op=LOAD Oct 2 19:55:08.166924 kernel: audit: type=1334 audit(1696276507.744:91): prog-id=4 op=UNLOAD Oct 2 19:55:08.166941 kernel: audit: type=1334 audit(1696276507.744:92): prog-id=5 op=UNLOAD Oct 2 19:55:08.166958 kernel: audit: type=1334 audit(1696276507.750:93): prog-id=15 op=LOAD Oct 2 19:55:08.166976 kernel: audit: type=1334 audit(1696276507.750:94): prog-id=12 op=UNLOAD Oct 2 19:55:08.166996 kernel: audit: type=1334 audit(1696276507.752:95): prog-id=16 op=LOAD Oct 2 19:55:08.167013 kernel: audit: type=1334 audit(1696276507.759:96): prog-id=17 op=LOAD Oct 2 19:55:08.167031 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:55:08.167050 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:55:08.167069 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:55:08.167088 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:55:08.167106 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:55:08.167126 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:55:08.167148 systemd[1]: Created slice system-getty.slice. Oct 2 19:55:08.167169 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:55:08.167188 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:55:08.167208 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:55:08.167227 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:55:08.167245 systemd[1]: Created slice user.slice. 
Oct 2 19:55:08.167264 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:55:08.167284 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:55:08.167306 systemd[1]: Set up automount boot.automount. Oct 2 19:55:08.167325 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:55:08.167344 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:55:08.167363 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:55:08.167380 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:55:08.167398 systemd[1]: Reached target integritysetup.target. Oct 2 19:55:08.167414 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:55:08.167432 systemd[1]: Reached target remote-fs.target. Oct 2 19:55:08.167451 systemd[1]: Reached target slices.target. Oct 2 19:55:08.167470 systemd[1]: Reached target swap.target. Oct 2 19:55:08.167492 systemd[1]: Reached target torcx.target. Oct 2 19:55:08.167511 systemd[1]: Reached target veritysetup.target. Oct 2 19:55:08.167530 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:55:08.167548 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:55:08.167566 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:55:08.167585 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:55:08.167607 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:55:08.167646 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:55:08.167664 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:55:08.167684 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:55:08.167702 systemd[1]: Mounting media.mount... Oct 2 19:55:08.167722 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:55:08.167741 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:55:08.167759 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:55:08.167780 systemd[1]: Mounting tmp.mount... 
Oct 2 19:55:08.167799 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:55:08.167817 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:55:08.167835 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:55:08.167853 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:55:08.167871 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:55:08.167891 systemd[1]: Starting modprobe@drm.service... Oct 2 19:55:08.167909 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:55:08.167927 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:55:08.167947 systemd[1]: Starting modprobe@loop.service... Oct 2 19:55:08.167966 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:55:08.167985 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:55:08.168004 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:55:08.168022 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:55:08.168040 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:55:08.168058 systemd[1]: Stopped systemd-journald.service. Oct 2 19:55:08.168076 systemd[1]: Starting systemd-journald.service... Oct 2 19:55:08.168096 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:55:08.168116 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:55:08.168135 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:55:08.168158 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:55:08.168178 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:55:08.168199 systemd[1]: Stopped verity-setup.service. Oct 2 19:55:08.168218 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:55:08.168236 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:55:08.168255 systemd[1]: Mounted dev-mqueue.mount. 
Oct 2 19:55:08.168273 systemd[1]: Mounted media.mount. Oct 2 19:55:08.168301 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:55:08.168320 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:55:08.168339 systemd[1]: Mounted tmp.mount. Oct 2 19:55:08.168358 kernel: loop: module loaded Oct 2 19:55:08.176554 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:55:08.176605 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:55:08.176815 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:55:08.176835 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:55:08.176854 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:55:08.176878 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:55:08.176897 systemd[1]: Finished modprobe@drm.service. Oct 2 19:55:08.176915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:55:08.176934 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:55:08.176953 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:55:08.176974 systemd[1]: Finished modprobe@loop.service. Oct 2 19:55:08.176992 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:55:08.177011 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:55:08.177030 systemd[1]: Reached target network-pre.target. Oct 2 19:55:08.177049 kernel: fuse: init (API version 7.34) Oct 2 19:55:08.177068 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:55:08.177089 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:55:08.177108 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:55:08.177127 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:55:08.177145 systemd[1]: Finished modprobe@fuse.service. 
Oct 2 19:55:08.177172 systemd-journald[1401]: Journal started Oct 2 19:55:08.177255 systemd-journald[1401]: Runtime Journal (/run/log/journal/ec231752562d4014a637b19d05be742e) is 4.8M, max 38.7M, 33.9M free. Oct 2 19:55:02.507000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:55:02.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:55:02.697000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:55:02.697000 audit: BPF prog-id=10 op=LOAD Oct 2 19:55:02.697000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:55:02.697000 audit: BPF prog-id=11 op=LOAD Oct 2 19:55:02.697000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:55:07.735000 audit: BPF prog-id=12 op=LOAD Oct 2 19:55:07.735000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:55:07.736000 audit: BPF prog-id=13 op=LOAD Oct 2 19:55:07.742000 audit: BPF prog-id=14 op=LOAD Oct 2 19:55:07.744000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:55:07.744000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:55:07.750000 audit: BPF prog-id=15 op=LOAD Oct 2 19:55:07.750000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:55:07.752000 audit: BPF prog-id=16 op=LOAD Oct 2 19:55:07.759000 audit: BPF prog-id=17 op=LOAD Oct 2 19:55:07.759000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:55:07.759000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:55:07.763000 audit: BPF prog-id=18 op=LOAD Oct 2 19:55:07.763000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:55:07.765000 audit: BPF prog-id=19 op=LOAD Oct 2 19:55:07.766000 audit: BPF prog-id=20 op=LOAD Oct 2 19:55:07.766000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:55:07.766000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:55:07.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:07.773000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:55:07.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:07.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:08.014000 audit: BPF prog-id=21 op=LOAD Oct 2 19:55:08.015000 audit: BPF prog-id=22 op=LOAD Oct 2 19:55:08.015000 audit: BPF prog-id=23 op=LOAD Oct 2 19:55:08.015000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:55:08.015000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:55:08.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:08.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:08.148000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:55:08.148000 audit[1401]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd948666a0 a2=4000 a3=7ffd9486673c items=0 ppid=1 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:08.148000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:55:08.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:07.732654 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:55:03.100993 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:55:07.768512 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:55:03.102085 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:55:03.102107 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:55:03.102142 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:55:03.102153 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:55:08.189721 systemd[1]: Started systemd-journald.service. Oct 2 19:55:08.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.102189 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:55:08.185610 systemd[1]: Mounted sys-kernel-config.mount. 
Oct 2 19:55:03.102203 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:55:03.102393 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:55:03.102435 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:55:08.190465 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:55:03.102448 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:55:03.104580 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:55:03.104634 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:55:03.104662 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:55:03.104678 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:55:03.104696 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=info 
msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:55:03.104710 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:55:07.015380 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:55:07.016065 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:55:07.016762 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:55:07.017051 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:55:07.017137 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:55:08.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:07.017226 /usr/lib/systemd/system-generators/torcx-generator[1325]: time="2023-10-02T19:55:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:55:08.199466 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:55:08.202064 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:55:08.203774 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:55:08.209451 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:55:08.212745 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:55:08.214807 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:55:08.216993 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:55:08.238528 systemd-journald[1401]: Time spent on flushing to /var/log/journal/ec231752562d4014a637b19d05be742e is 108.854ms for 1207 entries. Oct 2 19:55:08.238528 systemd-journald[1401]: System Journal (/var/log/journal/ec231752562d4014a637b19d05be742e) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:55:08.366421 systemd-journald[1401]: Received client request to flush runtime journal. Oct 2 19:55:08.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:08.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.245598 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:55:08.382070 udevadm[1434]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:55:08.260534 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:55:08.262537 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:55:08.312950 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:55:08.316028 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:55:08.358830 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:55:08.361962 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:55:08.369999 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:55:08.468964 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:55:08.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:55:08.471788 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:55:08.527696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:55:08.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.153213 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:55:09.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.153000 audit: BPF prog-id=24 op=LOAD Oct 2 19:55:09.153000 audit: BPF prog-id=25 op=LOAD Oct 2 19:55:09.153000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:55:09.153000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:55:09.156351 systemd[1]: Starting systemd-udevd.service... Oct 2 19:55:09.177586 systemd-udevd[1444]: Using default interface naming scheme 'v252'. Oct 2 19:55:09.227994 systemd[1]: Started systemd-udevd.service. Oct 2 19:55:09.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.232000 audit: BPF prog-id=26 op=LOAD Oct 2 19:55:09.234090 systemd[1]: Starting systemd-networkd.service... Oct 2 19:55:09.251000 audit: BPF prog-id=27 op=LOAD Oct 2 19:55:09.252000 audit: BPF prog-id=28 op=LOAD Oct 2 19:55:09.252000 audit: BPF prog-id=29 op=LOAD Oct 2 19:55:09.255804 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:55:09.330100 systemd[1]: Started systemd-userdbd.service. 
Oct 2 19:55:09.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.359505 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:55:09.432644 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 2 19:55:09.433942 (udev-worker)[1455]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:55:09.445235 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:55:09.445368 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Oct 2 19:55:09.451899 kernel: ACPI: button: Sleep Button [SLPF] Oct 2 19:55:09.496685 systemd-networkd[1452]: lo: Link UP Oct 2 19:55:09.496697 systemd-networkd[1452]: lo: Gained carrier Oct 2 19:55:09.497265 systemd-networkd[1452]: Enumeration completed Oct 2 19:55:09.497387 systemd[1]: Started systemd-networkd.service. Oct 2 19:55:09.499122 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:55:09.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.502771 systemd[1]: Starting systemd-networkd-wait-online.service... 
Oct 2 19:55:09.516648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:55:09.516880 systemd-networkd[1452]: eth0: Link UP Oct 2 19:55:09.517059 systemd-networkd[1452]: eth0: Gained carrier Oct 2 19:55:09.439000 audit[1449]: AVC avc: denied { confidentiality } for pid=1449 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:55:09.529822 systemd-networkd[1452]: eth0: DHCPv4 address 172.31.18.171/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:55:09.439000 audit[1449]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b44b74db60 a1=32194 a2=7f2dc6eb5bc5 a3=5 items=106 ppid=1444 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:09.439000 audit: CWD cwd="/" Oct 2 19:55:09.439000 audit: PATH item=0 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=1 name=(null) inode=13915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=2 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=3 name=(null) inode=13916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=4 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=5 name=(null) inode=13917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=6 name=(null) inode=13917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=7 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=8 name=(null) inode=13917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=9 name=(null) inode=13919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=10 name=(null) inode=13917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=11 name=(null) inode=13920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=12 name=(null) inode=13917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=13 name=(null) inode=13921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=14 name=(null) inode=13917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=15 name=(null) inode=13922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=16 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=17 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=18 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=19 name=(null) inode=13924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=20 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=21 name=(null) inode=13925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=22 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH 
item=23 name=(null) inode=13926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=24 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=25 name=(null) inode=13927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=26 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=27 name=(null) inode=13928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=28 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=29 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=30 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=31 name=(null) inode=13930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=32 name=(null) inode=13929 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=33 name=(null) inode=13931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=34 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=35 name=(null) inode=13932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=36 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=37 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=38 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=39 name=(null) inode=13934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=40 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=41 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=42 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=43 name=(null) inode=13936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=44 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=45 name=(null) inode=13937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=46 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=47 name=(null) inode=13938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=48 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=49 name=(null) inode=13939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=50 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=51 name=(null) inode=13940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=52 name=(null) inode=41 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=53 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=54 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=55 name=(null) inode=13942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=56 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=57 name=(null) inode=13943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=58 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=59 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 
2 19:55:09.439000 audit: PATH item=60 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=61 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=62 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=63 name=(null) inode=13946 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=64 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=65 name=(null) inode=13947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=66 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=67 name=(null) inode=13948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=68 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=69 name=(null) 
inode=13949 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=70 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=71 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=72 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=73 name=(null) inode=13951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=74 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=75 name=(null) inode=13952 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=76 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=77 name=(null) inode=13953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=78 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=79 name=(null) inode=13954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=80 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=81 name=(null) inode=13955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=82 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=83 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=84 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=85 name=(null) inode=13957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=86 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=87 name=(null) inode=13958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=88 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=89 name=(null) inode=13959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=90 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=91 name=(null) inode=13960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=92 name=(null) inode=13956 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=93 name=(null) inode=13961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=94 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=95 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=96 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=97 name=(null) inode=13963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=98 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=99 name=(null) inode=13964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=100 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=101 name=(null) inode=13965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=102 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=103 name=(null) inode=13966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=104 name=(null) inode=13962 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.439000 audit: PATH item=105 name=(null) inode=13967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 
19:55:09.439000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:55:09.572748 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Oct 2 19:55:09.580728 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Oct 2 19:55:09.591682 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:55:09.685678 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1446) Oct 2 19:55:09.817387 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:55:09.856118 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:55:09.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.859387 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:55:09.901133 lvm[1558]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:55:09.927982 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:55:09.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.929542 systemd[1]: Reached target cryptsetup.target. Oct 2 19:55:09.932444 systemd[1]: Starting lvm2-activation.service... Oct 2 19:55:09.941610 lvm[1559]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:55:09.977092 systemd[1]: Finished lvm2-activation.service. Oct 2 19:55:09.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:09.980835 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:55:09.984983 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:55:09.985024 systemd[1]: Reached target local-fs.target. Oct 2 19:55:09.986753 systemd[1]: Reached target machines.target. Oct 2 19:55:09.990088 systemd[1]: Starting ldconfig.service... Oct 2 19:55:09.993546 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:55:09.993611 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:55:09.995791 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:55:09.999355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:55:10.004653 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:55:10.006505 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:55:10.006570 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:55:10.010527 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:55:10.032546 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1561 (bootctl) Oct 2 19:55:10.034818 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:55:10.063441 systemd-tmpfiles[1564]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:55:10.064492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:55:10.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:10.066885 systemd-tmpfiles[1564]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:55:10.070395 systemd-tmpfiles[1564]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:55:10.203493 systemd-fsck[1569]: fsck.fat 4.2 (2021-01-31) Oct 2 19:55:10.203493 systemd-fsck[1569]: /dev/nvme0n1p1: 789 files, 115069/258078 clusters Oct 2 19:55:10.206033 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:55:10.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.210021 systemd[1]: Mounting boot.mount... Oct 2 19:55:10.229861 systemd[1]: Mounted boot.mount. Oct 2 19:55:10.264907 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:55:10.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.362971 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:55:10.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.366237 systemd[1]: Starting audit-rules.service... Oct 2 19:55:10.370647 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:55:10.377000 audit: BPF prog-id=30 op=LOAD Oct 2 19:55:10.373950 systemd[1]: Starting systemd-journal-catalog-update.service... 
Oct 2 19:55:10.389000 audit: BPF prog-id=31 op=LOAD Oct 2 19:55:10.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.386088 systemd[1]: Starting systemd-resolved.service... Oct 2 19:55:10.392741 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:55:10.397151 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:55:10.402970 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:55:10.423000 audit[1589]: SYSTEM_BOOT pid=1589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.405047 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:55:10.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.434435 systemd[1]: Finished systemd-update-utmp.service. 
Oct 2 19:55:10.585000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct 2 19:55:10.585000 audit[1603]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff44b85180 a2=420 a3=0 items=0 ppid=1583 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:55:10.585000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct 2 19:55:10.589881 augenrules[1603]: No rules
Oct 2 19:55:10.588332 systemd[1]: Finished audit-rules.service.
Oct 2 19:55:10.598533 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct 2 19:55:10.612350 systemd-resolved[1587]: Positive Trust Anchors:
Oct 2 19:55:10.612369 systemd-resolved[1587]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:55:10.612422 systemd-resolved[1587]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:55:10.649475 systemd[1]: Started systemd-timesyncd.service.
Oct 2 19:55:10.651955 systemd[1]: Reached target time-set.target.
Oct 2 19:55:10.665924 systemd-resolved[1587]: Defaulting to hostname 'linux'.
Oct 2 19:55:10.669561 systemd[1]: Started systemd-resolved.service.
Oct 2 19:55:10.670992 systemd[1]: Reached target network.target.
Oct 2 19:55:10.672339 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:55:10.711777 systemd-networkd[1452]: eth0: Gained IPv6LL
Oct 2 19:55:10.713556 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 2 19:55:10.714935 systemd[1]: Reached target network-online.target.
Oct 2 19:55:11.375238 systemd-resolved[1587]: Clock change detected. Flushing caches.
Oct 2 19:55:11.378317 systemd-timesyncd[1588]: Contacted time server 45.83.234.123:123 (0.flatcar.pool.ntp.org).
Oct 2 19:55:11.378779 systemd-timesyncd[1588]: Initial clock synchronization to Mon 2023-10-02 19:55:11.375110 UTC.
Oct 2 19:55:11.679645 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 2 19:55:11.680610 systemd[1]: Finished systemd-machine-id-commit.service.
Oct 2 19:55:11.940055 ldconfig[1560]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 2 19:55:11.944377 systemd[1]: Finished ldconfig.service.
Oct 2 19:55:11.946899 systemd[1]: Starting systemd-update-done.service...
Oct 2 19:55:11.954538 systemd[1]: Finished systemd-update-done.service.
Oct 2 19:55:11.955890 systemd[1]: Reached target sysinit.target.
Oct 2 19:55:11.957206 systemd[1]: Started motdgen.path.
Oct 2 19:55:11.958174 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 2 19:55:11.959722 systemd[1]: Started logrotate.timer.
Oct 2 19:55:11.960779 systemd[1]: Started mdadm.timer.
Oct 2 19:55:11.961673 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 2 19:55:11.962975 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 2 19:55:11.963004 systemd[1]: Reached target paths.target.
Oct 2 19:55:11.963957 systemd[1]: Reached target timers.target.
Oct 2 19:55:11.965327 systemd[1]: Listening on dbus.socket.
Oct 2 19:55:11.967407 systemd[1]: Starting docker.socket...
Oct 2 19:55:11.971443 systemd[1]: Listening on sshd.socket.
Oct 2 19:55:11.972806 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:55:11.973487 systemd[1]: Listening on docker.socket.
Oct 2 19:55:11.974833 systemd[1]: Reached target sockets.target.
Oct 2 19:55:11.975909 systemd[1]: Reached target basic.target.
Oct 2 19:55:11.976946 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:55:11.977029 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:55:11.978247 systemd[1]: Started amazon-ssm-agent.service.
Oct 2 19:55:11.980782 systemd[1]: Starting containerd.service...
Oct 2 19:55:11.984435 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Oct 2 19:55:11.987432 systemd[1]: Starting dbus.service...
Oct 2 19:55:11.991913 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 2 19:55:11.994218 systemd[1]: Starting extend-filesystems.service...
Oct 2 19:55:11.996255 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 2 19:55:12.001067 systemd[1]: Starting motdgen.service...
Oct 2 19:55:12.004231 systemd[1]: Started nvidia.service.
Oct 2 19:55:12.007592 systemd[1]: Starting prepare-cni-plugins.service...
Oct 2 19:55:12.011250 systemd[1]: Starting prepare-critools.service...
Oct 2 19:55:12.014254 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 2 19:55:12.017967 systemd[1]: Starting sshd-keygen.service...
Oct 2 19:55:12.025254 systemd[1]: Starting systemd-logind.service...
Oct 2 19:55:12.094953 jq[1619]: false
Oct 2 19:55:12.026576 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:55:12.026651 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 2 19:55:12.027344 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 2 19:55:12.028453 systemd[1]: Starting update-engine.service...
Oct 2 19:55:12.036301 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 2 19:55:12.076286 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 2 19:55:12.076579 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 2 19:55:12.097672 jq[1629]: true
Oct 2 19:55:12.093743 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 2 19:55:12.093956 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 2 19:55:12.180103 tar[1631]: ./
Oct 2 19:55:12.180103 tar[1631]: ./macvlan
Oct 2 19:55:12.203811 tar[1632]: crictl
Oct 2 19:55:12.246951 dbus-daemon[1618]: [system] SELinux support is enabled
Oct 2 19:55:12.247518 systemd[1]: Started dbus.service.
Oct 2 19:55:12.253510 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 2 19:55:12.255002 jq[1634]: true
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found nvme0n1
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found nvme0n1p1
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found nvme0n1p2
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found nvme0n1p3
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found usr
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found nvme0n1p4
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found nvme0n1p6
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found nvme0n1p7
Oct 2 19:55:12.256855 extend-filesystems[1620]: Found nvme0n1p9
Oct 2 19:55:12.256855 extend-filesystems[1620]: Checking size of /dev/nvme0n1p9
Oct 2 19:55:12.253556 systemd[1]: Reached target system-config.target.
Oct 2 19:55:12.255438 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 2 19:55:12.255471 systemd[1]: Reached target user-config.target.
Oct 2 19:55:12.299850 systemd[1]: motdgen.service: Deactivated successfully.
Oct 2 19:55:12.300108 systemd[1]: Finished motdgen.service.
Oct 2 19:55:12.336268 dbus-daemon[1618]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1452 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Oct 2 19:55:12.344107 systemd[1]: Starting systemd-hostnamed.service...
Oct 2 19:55:12.380239 extend-filesystems[1620]: Resized partition /dev/nvme0n1p9
Oct 2 19:55:12.412523 extend-filesystems[1683]: resize2fs 1.46.5 (30-Dec-2021)
Oct 2 19:55:12.438480 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Oct 2 19:55:12.508119 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Oct 2 19:55:12.531018 bash[1675]: Updated "/home/core/.ssh/authorized_keys"
Oct 2 19:55:12.531244 update_engine[1628]: I1002 19:55:12.528200 1628 main.cc:92] Flatcar Update Engine starting
Oct 2 19:55:12.515894 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 2 19:55:12.535494 extend-filesystems[1683]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Oct 2 19:55:12.535494 extend-filesystems[1683]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 2 19:55:12.535494 extend-filesystems[1683]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Oct 2 19:55:12.586806 extend-filesystems[1620]: Resized filesystem in /dev/nvme0n1p9
Oct 2 19:55:12.591304 update_engine[1628]: I1002 19:55:12.565548 1628 update_check_scheduler.cc:74] Next update check in 8m57s
Oct 2 19:55:12.537739 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 2 19:55:12.591486 amazon-ssm-agent[1615]: 2023/10/02 19:55:12 Failed to load instance info from vault. RegistrationKey does not exist.
Oct 2 19:55:12.591486 amazon-ssm-agent[1615]: Initializing new seelog logger
Oct 2 19:55:12.591486 amazon-ssm-agent[1615]: New Seelog Logger Creation Complete
Oct 2 19:55:12.591486 amazon-ssm-agent[1615]: 2023/10/02 19:55:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 2 19:55:12.591486 amazon-ssm-agent[1615]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 2 19:55:12.591486 amazon-ssm-agent[1615]: 2023/10/02 19:55:12 processing appconfig overrides
Oct 2 19:55:12.537963 systemd[1]: Finished extend-filesystems.service.
Oct 2 19:55:12.565351 systemd[1]: Started update-engine.service.
Oct 2 19:55:12.579866 systemd[1]: Started locksmithd.service.
Oct 2 19:55:12.645970 systemd-logind[1627]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 2 19:55:12.646944 systemd-logind[1627]: Watching system buttons on /dev/input/event2 (Sleep Button)
Oct 2 19:55:12.647215 systemd-logind[1627]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 2 19:55:12.647844 systemd-logind[1627]: New seat seat0.
Oct 2 19:55:12.648470 env[1633]: time="2023-10-02T19:55:12.648419840Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 2 19:55:12.661832 systemd[1]: Started systemd-logind.service.
Oct 2 19:55:12.711870 tar[1631]: ./static
Oct 2 19:55:12.769456 systemd[1]: nvidia.service: Deactivated successfully.
Oct 2 19:55:12.860014 env[1633]: time="2023-10-02T19:55:12.859926992Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 2 19:55:12.869417 env[1633]: time="2023-10-02T19:55:12.869304845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:55:12.872591 env[1633]: time="2023-10-02T19:55:12.872531366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:55:12.882326 tar[1631]: ./vlan
Oct 2 19:55:12.882630 env[1633]: time="2023-10-02T19:55:12.882583552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:55:12.883210 env[1633]: time="2023-10-02T19:55:12.883174991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:55:12.883370 env[1633]: time="2023-10-02T19:55:12.883348644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 2 19:55:12.883485 env[1633]: time="2023-10-02T19:55:12.883466781Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 2 19:55:12.883572 env[1633]: time="2023-10-02T19:55:12.883557662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 2 19:55:12.883795 env[1633]: time="2023-10-02T19:55:12.883767537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:55:12.884245 env[1633]: time="2023-10-02T19:55:12.884216682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:55:12.884713 env[1633]: time="2023-10-02T19:55:12.884679880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:55:12.884824 env[1633]: time="2023-10-02T19:55:12.884807487Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 2 19:55:12.884993 env[1633]: time="2023-10-02T19:55:12.884975472Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 2 19:55:12.885101 env[1633]: time="2023-10-02T19:55:12.885084764Z" level=info msg="metadata content store policy set" policy=shared
Oct 2 19:55:12.890395 dbus-daemon[1618]: [system] Successfully activated service 'org.freedesktop.hostname1'
Oct 2 19:55:12.891145 systemd[1]: Started systemd-hostnamed.service.
Oct 2 19:55:12.892107 env[1633]: time="2023-10-02T19:55:12.892026574Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 2 19:55:12.892233 env[1633]: time="2023-10-02T19:55:12.892216280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 2 19:55:12.892320 env[1633]: time="2023-10-02T19:55:12.892291117Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 2 19:55:12.892442 env[1633]: time="2023-10-02T19:55:12.892428112Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.892576 env[1633]: time="2023-10-02T19:55:12.892562523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.892645 env[1633]: time="2023-10-02T19:55:12.892633018Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.892734 env[1633]: time="2023-10-02T19:55:12.892721347Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.892913 env[1633]: time="2023-10-02T19:55:12.892866538Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.893122 env[1633]: time="2023-10-02T19:55:12.893103071Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.893216 env[1633]: time="2023-10-02T19:55:12.893198084Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.893491 env[1633]: time="2023-10-02T19:55:12.893472618Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.893584 env[1633]: time="2023-10-02T19:55:12.893568432Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 2 19:55:12.893921 env[1633]: time="2023-10-02T19:55:12.893898838Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 2 19:55:12.894145 env[1633]: time="2023-10-02T19:55:12.894127435Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 2 19:55:12.894954 dbus-daemon[1618]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1676 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Oct 2 19:55:12.899313 systemd[1]: Starting polkit.service...
Oct 2 19:55:12.911321 env[1633]: time="2023-10-02T19:55:12.911273131Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 2 19:55:12.911517 env[1633]: time="2023-10-02T19:55:12.911499208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.911617 env[1633]: time="2023-10-02T19:55:12.911600366Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 2 19:55:12.911765 env[1633]: time="2023-10-02T19:55:12.911747397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.911996 env[1633]: time="2023-10-02T19:55:12.911975518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.912209 env[1633]: time="2023-10-02T19:55:12.912084232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.912332 env[1633]: time="2023-10-02T19:55:12.912313587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.912412 env[1633]: time="2023-10-02T19:55:12.912397597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.912483 env[1633]: time="2023-10-02T19:55:12.912470116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.912551 env[1633]: time="2023-10-02T19:55:12.912538797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.912618 env[1633]: time="2023-10-02T19:55:12.912604278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.912721 env[1633]: time="2023-10-02T19:55:12.912694225Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 2 19:55:12.912975 env[1633]: time="2023-10-02T19:55:12.912955909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.913064 env[1633]: time="2023-10-02T19:55:12.913049414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.913165 env[1633]: time="2023-10-02T19:55:12.913149262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.913240 env[1633]: time="2023-10-02T19:55:12.913226394Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 2 19:55:12.913385 env[1633]: time="2023-10-02T19:55:12.913309111Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 2 19:55:12.913467 env[1633]: time="2023-10-02T19:55:12.913452841Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 2 19:55:12.913551 env[1633]: time="2023-10-02T19:55:12.913534201Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 2 19:55:12.913648 env[1633]: time="2023-10-02T19:55:12.913633728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 2 19:55:12.914050 env[1633]: time="2023-10-02T19:55:12.913983104Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 2 19:55:12.917188 env[1633]: time="2023-10-02T19:55:12.914267849Z" level=info msg="Connect containerd service"
Oct 2 19:55:12.917188 env[1633]: time="2023-10-02T19:55:12.914322969Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 2 19:55:12.917188 env[1633]: time="2023-10-02T19:55:12.915294790Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 2 19:55:12.920100 env[1633]: time="2023-10-02T19:55:12.920049520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 2 19:55:12.920293 env[1633]: time="2023-10-02T19:55:12.920275828Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 2 19:55:12.920494 systemd[1]: Started containerd.service.
Oct 2 19:55:12.920784 env[1633]: time="2023-10-02T19:55:12.920764901Z" level=info msg="containerd successfully booted in 0.396756s"
Oct 2 19:55:12.926668 env[1633]: time="2023-10-02T19:55:12.926492939Z" level=info msg="Start subscribing containerd event"
Oct 2 19:55:12.953668 polkitd[1724]: Started polkitd version 121
Oct 2 19:55:12.982563 env[1633]: time="2023-10-02T19:55:12.982512541Z" level=info msg="Start recovering state"
Oct 2 19:55:12.983445 env[1633]: time="2023-10-02T19:55:12.983390134Z" level=info msg="Start event monitor"
Oct 2 19:55:12.983950 env[1633]: time="2023-10-02T19:55:12.983926229Z" level=info msg="Start snapshots syncer"
Oct 2 19:55:12.984052 env[1633]: time="2023-10-02T19:55:12.984034472Z" level=info msg="Start cni network conf syncer for default"
Oct 2 19:55:12.984193 env[1633]: time="2023-10-02T19:55:12.984177024Z" level=info msg="Start streaming server"
Oct 2 19:55:13.006792 polkitd[1724]: Loading rules from directory /etc/polkit-1/rules.d
Oct 2 19:55:13.006886 polkitd[1724]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 2 19:55:13.032686 polkitd[1724]: Finished loading, compiling and executing 2 rules
Oct 2 19:55:13.033724 dbus-daemon[1618]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct 2 19:55:13.034232 systemd[1]: Started polkit.service.
Oct 2 19:55:13.037273 polkitd[1724]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 2 19:55:13.097964 systemd-hostnamed[1676]: Hostname set to (transient)
Oct 2 19:55:13.098116 systemd-resolved[1587]: System hostname changed to 'ip-172-31-18-171'.
Oct 2 19:55:13.166375 tar[1631]: ./portmap
Oct 2 19:55:13.299235 coreos-metadata[1617]: Oct 02 19:55:13.296 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 2 19:55:13.304960 coreos-metadata[1617]: Oct 02 19:55:13.304 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Oct 2 19:55:13.306576 coreos-metadata[1617]: Oct 02 19:55:13.306 INFO Fetch successful
Oct 2 19:55:13.306752 coreos-metadata[1617]: Oct 02 19:55:13.306 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Oct 2 19:55:13.311310 coreos-metadata[1617]: Oct 02 19:55:13.311 INFO Fetch successful
Oct 2 19:55:13.313919 unknown[1617]: wrote ssh authorized keys file for user: core
Oct 2 19:55:13.344099 update-ssh-keys[1790]: Updated "/home/core/.ssh/authorized_keys"
Oct 2 19:55:13.344885 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Oct 2 19:55:13.415248 tar[1631]: ./host-local
Oct 2 19:55:13.534050 amazon-ssm-agent[1615]: 2023-10-02 19:55:13 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-07983d4f322ce3793 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-07983d4f322ce3793 because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct 2 19:55:13.534050 amazon-ssm-agent[1615]: status code: 400, request id: 39a10b46-eea1-4829-823a-dea195a79eab
Oct 2 19:55:13.534050 amazon-ssm-agent[1615]: 2023-10-02 19:55:13 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period
Oct 2 19:55:13.554517 tar[1631]: ./vrf
Oct 2 19:55:13.729024 tar[1631]: ./bridge
Oct 2 19:55:13.815810 systemd[1]: Finished prepare-critools.service.
Oct 2 19:55:13.840755 tar[1631]: ./tuning
Oct 2 19:55:13.879431 tar[1631]: ./firewall
Oct 2 19:55:13.929943 tar[1631]: ./host-device
Oct 2 19:55:13.980207 tar[1631]: ./sbr
Oct 2 19:55:14.007421 sshd_keygen[1654]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 2 19:55:14.031488 tar[1631]: ./loopback
Oct 2 19:55:14.054013 systemd[1]: Finished sshd-keygen.service.
Oct 2 19:55:14.057818 systemd[1]: Starting issuegen.service...
Oct 2 19:55:14.069839 systemd[1]: issuegen.service: Deactivated successfully.
Oct 2 19:55:14.070063 systemd[1]: Finished issuegen.service.
Oct 2 19:55:14.073987 systemd[1]: Starting systemd-user-sessions.service...
Oct 2 19:55:14.080101 tar[1631]: ./dhcp
Oct 2 19:55:14.083468 systemd[1]: Finished systemd-user-sessions.service.
Oct 2 19:55:14.087604 systemd[1]: Started getty@tty1.service.
Oct 2 19:55:14.090532 systemd[1]: Started serial-getty@ttyS0.service.
Oct 2 19:55:14.092313 systemd[1]: Reached target getty.target.
Oct 2 19:55:14.203117 tar[1631]: ./ptp
Oct 2 19:55:14.256767 tar[1631]: ./ipvlan
Oct 2 19:55:14.292408 locksmithd[1698]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 2 19:55:14.327766 tar[1631]: ./bandwidth
Oct 2 19:55:14.414590 systemd[1]: Finished prepare-cni-plugins.service.
Oct 2 19:55:14.416250 systemd[1]: Reached target multi-user.target.
Oct 2 19:55:14.419376 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 2 19:55:14.428850 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 2 19:55:14.429085 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 2 19:55:14.430693 systemd[1]: Startup finished in 908ms (kernel) + 11.530s (initrd) + 11.520s (userspace) = 23.959s.
Oct 2 19:55:21.954000 systemd[1]: Created slice system-sshd.slice.
Oct 2 19:55:21.956261 systemd[1]: Started sshd@0-172.31.18.171:22-139.178.89.65:37428.service.
Oct 2 19:55:22.146037 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 37428 ssh2: RSA SHA256:KIgN61+zXp+dlF5wFSRCN73yCMDWObQthNuB0BDLhpo
Oct 2 19:55:22.148468 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:55:22.160843 systemd[1]: Created slice user-500.slice.
Oct 2 19:55:22.162508 systemd[1]: Starting user-runtime-dir@500.service...
Oct 2 19:55:22.167007 systemd-logind[1627]: New session 1 of user core.
Oct 2 19:55:22.174476 systemd[1]: Finished user-runtime-dir@500.service.
Oct 2 19:55:22.177539 systemd[1]: Starting user@500.service...
Oct 2 19:55:22.181732 (systemd)[1828]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:55:22.269542 systemd[1828]: Queued start job for default target default.target.
Oct 2 19:55:22.270275 systemd[1828]: Reached target paths.target.
Oct 2 19:55:22.270309 systemd[1828]: Reached target sockets.target.
Oct 2 19:55:22.270327 systemd[1828]: Reached target timers.target.
Oct 2 19:55:22.270344 systemd[1828]: Reached target basic.target.
Oct 2 19:55:22.270465 systemd[1]: Started user@500.service.
Oct 2 19:55:22.271670 systemd[1]: Started session-1.scope.
Oct 2 19:55:22.272337 systemd[1828]: Reached target default.target.
Oct 2 19:55:22.272544 systemd[1828]: Startup finished in 83ms.
Oct 2 19:55:22.415013 systemd[1]: Started sshd@1-172.31.18.171:22-139.178.89.65:37432.service.
Oct 2 19:55:22.581232 sshd[1837]: Accepted publickey for core from 139.178.89.65 port 37432 ssh2: RSA SHA256:KIgN61+zXp+dlF5wFSRCN73yCMDWObQthNuB0BDLhpo
Oct 2 19:55:22.582866 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:55:22.588150 systemd-logind[1627]: New session 2 of user core.
Oct 2 19:55:22.589777 systemd[1]: Started session-2.scope.
Oct 2 19:55:22.717909 sshd[1837]: pam_unix(sshd:session): session closed for user core
Oct 2 19:55:22.720838 systemd[1]: sshd@1-172.31.18.171:22-139.178.89.65:37432.service: Deactivated successfully.
Oct 2 19:55:22.721715 systemd[1]: session-2.scope: Deactivated successfully.
Oct 2 19:55:22.722581 systemd-logind[1627]: Session 2 logged out. Waiting for processes to exit.
Oct 2 19:55:22.723454 systemd-logind[1627]: Removed session 2.
Oct 2 19:55:22.745134 systemd[1]: Started sshd@2-172.31.18.171:22-139.178.89.65:37434.service.
Oct 2 19:55:22.911361 sshd[1843]: Accepted publickey for core from 139.178.89.65 port 37434 ssh2: RSA SHA256:KIgN61+zXp+dlF5wFSRCN73yCMDWObQthNuB0BDLhpo
Oct 2 19:55:22.912496 sshd[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:55:22.921644 systemd[1]: Started session-3.scope.
Oct 2 19:55:22.922467 systemd-logind[1627]: New session 3 of user core.
Oct 2 19:55:23.043006 sshd[1843]: pam_unix(sshd:session): session closed for user core
Oct 2 19:55:23.046651 systemd[1]: sshd@2-172.31.18.171:22-139.178.89.65:37434.service: Deactivated successfully.
Oct 2 19:55:23.047581 systemd[1]: session-3.scope: Deactivated successfully.
Oct 2 19:55:23.048368 systemd-logind[1627]: Session 3 logged out. Waiting for processes to exit.
Oct 2 19:55:23.049527 systemd-logind[1627]: Removed session 3.
Oct 2 19:55:23.069182 systemd[1]: Started sshd@3-172.31.18.171:22-139.178.89.65:37448.service.
Oct 2 19:55:23.235605 sshd[1849]: Accepted publickey for core from 139.178.89.65 port 37448 ssh2: RSA SHA256:KIgN61+zXp+dlF5wFSRCN73yCMDWObQthNuB0BDLhpo
Oct 2 19:55:23.241369 sshd[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:55:23.249241 systemd[1]: Started session-4.scope.
Oct 2 19:55:23.249741 systemd-logind[1627]: New session 4 of user core.
Oct 2 19:55:23.393328 sshd[1849]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:23.399890 systemd[1]: sshd@3-172.31.18.171:22-139.178.89.65:37448.service: Deactivated successfully. Oct 2 19:55:23.402628 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:55:23.405656 systemd-logind[1627]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:55:23.410886 systemd-logind[1627]: Removed session 4. Oct 2 19:55:23.424329 systemd[1]: Started sshd@4-172.31.18.171:22-139.178.89.65:37462.service. Oct 2 19:55:23.595668 sshd[1855]: Accepted publickey for core from 139.178.89.65 port 37462 ssh2: RSA SHA256:KIgN61+zXp+dlF5wFSRCN73yCMDWObQthNuB0BDLhpo Oct 2 19:55:23.596814 sshd[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:23.601961 systemd[1]: Started session-5.scope. Oct 2 19:55:23.602637 systemd-logind[1627]: New session 5 of user core. Oct 2 19:55:23.717791 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:55:23.718102 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:55:23.726306 dbus-daemon[1618]: \xd0=: received setenforce notice (enforcing=-1789586704) Oct 2 19:55:23.728390 sudo[1858]: pam_unix(sudo:session): session closed for user root Oct 2 19:55:23.753032 sshd[1855]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:23.757892 systemd[1]: sshd@4-172.31.18.171:22-139.178.89.65:37462.service: Deactivated successfully. Oct 2 19:55:23.758956 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:55:23.759825 systemd-logind[1627]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:55:23.760926 systemd-logind[1627]: Removed session 5. Oct 2 19:55:23.780732 systemd[1]: Started sshd@5-172.31.18.171:22-139.178.89.65:37464.service. 
Oct 2 19:55:23.952961 sshd[1862]: Accepted publickey for core from 139.178.89.65 port 37464 ssh2: RSA SHA256:KIgN61+zXp+dlF5wFSRCN73yCMDWObQthNuB0BDLhpo Oct 2 19:55:23.953859 sshd[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:23.966104 systemd-logind[1627]: New session 6 of user core. Oct 2 19:55:23.967043 systemd[1]: Started session-6.scope. Oct 2 19:55:24.075912 sudo[1866]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:55:24.076223 sudo[1866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:55:24.079936 sudo[1866]: pam_unix(sudo:session): session closed for user root Oct 2 19:55:24.085823 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:55:24.086130 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:55:24.102453 systemd[1]: Stopping audit-rules.service... 
Oct 2 19:55:24.103000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:55:24.105897 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:55:24.105958 kernel: audit: type=1305 audit(1696276524.103:170): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:55:24.103000 audit[1869]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdafde5370 a2=420 a3=0 items=0 ppid=1 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:24.116686 kernel: audit: type=1300 audit(1696276524.103:170): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdafde5370 a2=420 a3=0 items=0 ppid=1 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:24.116785 kernel: audit: type=1327 audit(1696276524.103:170): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:55:24.103000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:55:24.116887 auditctl[1869]: No rules Oct 2 19:55:24.127404 kernel: audit: type=1131 audit(1696276524.116:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.118550 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:55:24.119216 systemd[1]: Stopped audit-rules.service. 
Oct 2 19:55:24.127030 systemd[1]: Starting audit-rules.service... Oct 2 19:55:24.153304 augenrules[1886]: No rules Oct 2 19:55:24.154011 systemd[1]: Finished audit-rules.service. Oct 2 19:55:24.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.155302 sudo[1865]: pam_unix(sudo:session): session closed for user root Oct 2 19:55:24.154000 audit[1865]: USER_END pid=1865 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.166127 kernel: audit: type=1130 audit(1696276524.153:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.166207 kernel: audit: type=1106 audit(1696276524.154:173): pid=1865 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.166254 kernel: audit: type=1104 audit(1696276524.154:174): pid=1865 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.154000 audit[1865]: CRED_DISP pid=1865 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:24.179564 sshd[1862]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:24.180000 audit[1862]: USER_END pid=1862 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:24.183481 systemd[1]: sshd@5-172.31.18.171:22-139.178.89.65:37464.service: Deactivated successfully. Oct 2 19:55:24.184465 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:55:24.186158 systemd-logind[1627]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:55:24.187478 systemd-logind[1627]: Removed session 6. Oct 2 19:55:24.180000 audit[1862]: CRED_DISP pid=1862 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:24.195247 kernel: audit: type=1106 audit(1696276524.180:175): pid=1862 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:24.195365 kernel: audit: type=1104 audit(1696276524.180:176): pid=1862 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:24.195401 kernel: audit: type=1131 audit(1696276524.181:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.171:22-139.178.89.65:37464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:55:24.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.18.171:22-139.178.89.65:37464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.204517 systemd[1]: Started sshd@6-172.31.18.171:22-139.178.89.65:37476.service. Oct 2 19:55:24.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.171:22-139.178.89.65:37476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.362000 audit[1892]: USER_ACCT pid=1892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:24.363978 sshd[1892]: Accepted publickey for core from 139.178.89.65 port 37476 ssh2: RSA SHA256:KIgN61+zXp+dlF5wFSRCN73yCMDWObQthNuB0BDLhpo Oct 2 19:55:24.364000 audit[1892]: CRED_ACQ pid=1892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:24.364000 audit[1892]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1b9e91d0 a2=3 a3=0 items=0 ppid=1 pid=1892 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:24.364000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:55:24.366582 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:24.373559 systemd[1]: Started session-7.scope. 
Oct 2 19:55:24.374039 systemd-logind[1627]: New session 7 of user core. Oct 2 19:55:24.379000 audit[1892]: USER_START pid=1892 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:24.383000 audit[1894]: CRED_ACQ pid=1894 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:24.482000 audit[1895]: USER_ACCT pid=1895 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.482000 audit[1895]: CRED_REFR pid=1895 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:24.483670 sudo[1895]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:55:24.483965 sudo[1895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:55:24.487000 audit[1895]: USER_START pid=1895 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.111890 systemd[1]: Reloading. 
Oct 2 19:55:25.254653 /usr/lib/systemd/system-generators/torcx-generator[1925]: time="2023-10-02T19:55:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:55:25.254694 /usr/lib/systemd/system-generators/torcx-generator[1925]: time="2023-10-02T19:55:25Z" level=info msg="torcx already run" Oct 2 19:55:25.354621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:55:25.354644 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:55:25.378519 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit: BPF prog-id=40 op=LOAD Oct 2 19:55:25.458000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.458000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit: BPF prog-id=41 op=LOAD Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:55:25.459000 audit: BPF prog-id=42 op=LOAD Oct 2 19:55:25.459000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:55:25.459000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.459000 audit: BPF prog-id=43 op=LOAD Oct 2 19:55:25.459000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit: BPF prog-id=44 op=LOAD Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.460000 audit: BPF prog-id=45 op=LOAD Oct 2 19:55:25.460000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:55:25.460000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:55:25.461000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.461000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.461000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.461000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.462000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.462000 audit: BPF prog-id=46 op=LOAD Oct 2 19:55:25.462000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.463000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.463000 audit: BPF prog-id=47 op=LOAD
Oct 2 19:55:25.463000 audit: BPF prog-id=38 op=UNLOAD
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.464000 audit: BPF prog-id=48 op=LOAD
Oct 2 19:55:25.464000 audit: BPF prog-id=31 op=UNLOAD
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit: BPF prog-id=49 op=LOAD
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.466000 audit: BPF prog-id=50 op=LOAD
Oct 2 19:55:25.466000 audit: BPF prog-id=24 op=UNLOAD
Oct 2 19:55:25.466000 audit: BPF prog-id=25 op=UNLOAD
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.467000 audit: BPF prog-id=51 op=LOAD
Oct 2 19:55:25.467000 audit: BPF prog-id=26 op=UNLOAD
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.471000 audit: BPF prog-id=52 op=LOAD
Oct 2 19:55:25.471000 audit: BPF prog-id=32 op=UNLOAD
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit: BPF prog-id=53 op=LOAD
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.472000 audit: BPF prog-id=54 op=LOAD
Oct 2 19:55:25.472000 audit: BPF prog-id=33 op=UNLOAD
Oct 2 19:55:25.472000 audit: BPF prog-id=34 op=UNLOAD
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit: BPF prog-id=55 op=LOAD
Oct 2 19:55:25.473000 audit: BPF prog-id=21 op=UNLOAD
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.473000 audit: BPF prog-id=56 op=LOAD
Oct 2 19:55:25.473000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:25.474000 audit: BPF prog-id=57 op=LOAD
Oct 2 19:55:25.474000 audit: BPF prog-id=22 op=UNLOAD
Oct 2 19:55:25.474000 audit: BPF prog-id=23 op=UNLOAD
Oct 2 19:55:25.490538 systemd[1]: Started kubelet.service.
Oct 2 19:55:25.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:25.510919 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:55:25.593204 kubelet[1976]: E1002 19:55:25.593039 1976 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:55:25.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:55:25.595907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:55:25.596097 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:55:25.637088 coreos-metadata[1983]: Oct 02 19:55:25.636 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 2 19:55:25.638652 coreos-metadata[1983]: Oct 02 19:55:25.638 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Oct 2 19:55:25.639778 coreos-metadata[1983]: Oct 02 19:55:25.639 INFO Fetch successful
Oct 2 19:55:25.640035 coreos-metadata[1983]: Oct 02 19:55:25.639 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Oct 2 19:55:25.641571 coreos-metadata[1983]: Oct 02 19:55:25.641 INFO Fetch successful
Oct 2 19:55:25.641757 coreos-metadata[1983]: Oct 02 19:55:25.641 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Oct 2 19:55:25.644013 coreos-metadata[1983]: Oct 02 19:55:25.643 INFO Fetch successful
Oct 2 19:55:25.644179 coreos-metadata[1983]: Oct 02 19:55:25.644 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Oct 2 19:55:25.648807 coreos-metadata[1983]: Oct 02 19:55:25.648 INFO Fetch successful
Oct 2 19:55:25.648959 coreos-metadata[1983]: Oct 02 19:55:25.648 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Oct 2 19:55:25.649449 coreos-metadata[1983]: Oct 02 19:55:25.649 INFO Fetch successful
Oct 2 19:55:25.649537 coreos-metadata[1983]: Oct 02 19:55:25.649 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Oct 2 19:55:25.649970 coreos-metadata[1983]: Oct 02 19:55:25.649 INFO Fetch successful
Oct 2 19:55:25.650043 coreos-metadata[1983]: Oct 02 19:55:25.649 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Oct 2 19:55:25.652011 coreos-metadata[1983]: Oct 02 19:55:25.651 INFO Fetch successful
Oct 2 19:55:25.652097 coreos-metadata[1983]: Oct 02 19:55:25.652 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Oct 2 19:55:25.652655 coreos-metadata[1983]: Oct 02 19:55:25.652 INFO Fetch successful
Oct 2 19:55:25.666055 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:55:25.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:26.177268 systemd[1]: Stopped kubelet.service.
Oct 2 19:55:26.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:26.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:26.209571 systemd[1]: Reloading.
Oct 2 19:55:26.361697 /usr/lib/systemd/system-generators/torcx-generator[2040]: time="2023-10-02T19:55:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:55:26.367055 /usr/lib/systemd/system-generators/torcx-generator[2040]: time="2023-10-02T19:55:26Z" level=info msg="torcx already run"
Oct 2 19:55:26.489783 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:55:26.489808 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:55:26.516419 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.605000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.606000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.606000 audit: BPF prog-id=58 op=LOAD
Oct 2 19:55:26.607000 audit: BPF prog-id=40 op=UNLOAD
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit: BPF prog-id=59 op=LOAD
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit: BPF prog-id=60 op=LOAD
Oct 2 19:55:26.607000 audit: BPF prog-id=41 op=UNLOAD
Oct 2 19:55:26.607000 audit: BPF prog-id=42 op=UNLOAD
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit: BPF prog-id=61 op=LOAD
Oct 2 19:55:26.608000 audit: BPF prog-id=43 op=UNLOAD
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit: BPF prog-id=62 op=LOAD
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.608000 audit: BPF prog-id=63 op=LOAD
Oct 2 19:55:26.608000 audit: BPF prog-id=44 op=UNLOAD
Oct 2 19:55:26.608000 audit: BPF prog-id=45 op=UNLOAD
Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:55:26.610000 audit[1]: AVC avc:
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.610000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.610000 audit: BPF prog-id=64 op=LOAD Oct 2 19:55:26.610000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.611000 audit: BPF prog-id=65 op=LOAD Oct 2 19:55:26.612000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.612000 audit: BPF prog-id=66 op=LOAD Oct 2 19:55:26.612000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit: BPF prog-id=67 op=LOAD Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit: BPF prog-id=68 op=LOAD Oct 2 19:55:26.615000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:55:26.615000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.616000 audit: BPF prog-id=69 op=LOAD Oct 2 19:55:26.616000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.622000 audit: BPF prog-id=70 op=LOAD Oct 2 19:55:26.622000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit: BPF prog-id=71 op=LOAD Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.623000 audit: BPF prog-id=72 op=LOAD Oct 2 19:55:26.623000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:55:26.623000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit: BPF prog-id=73 op=LOAD Oct 2 19:55:26.624000 audit: BPF prog-id=55 op=UNLOAD Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit: BPF prog-id=74 op=LOAD Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:26.624000 audit: BPF prog-id=75 op=LOAD Oct 2 19:55:26.624000 audit: BPF prog-id=56 op=UNLOAD Oct 2 19:55:26.624000 audit: BPF prog-id=57 op=UNLOAD Oct 2 19:55:26.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:26.646498 systemd[1]: Started kubelet.service. Oct 2 19:55:26.702967 kubelet[2093]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:55:26.702967 kubelet[2093]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Oct 2 19:55:26.702967 kubelet[2093]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:55:26.703427 kubelet[2093]: I1002 19:55:26.703032 2093 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:55:26.704824 kubelet[2093]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:55:26.704824 kubelet[2093]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:55:26.704824 kubelet[2093]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:55:27.133379 kubelet[2093]: I1002 19:55:27.133346 2093 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:55:27.133533 kubelet[2093]: I1002 19:55:27.133522 2093 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:55:27.133910 kubelet[2093]: I1002 19:55:27.133860 2093 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:55:27.136554 kubelet[2093]: I1002 19:55:27.136489 2093 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:55:27.139672 kubelet[2093]: I1002 19:55:27.139648 2093 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:55:27.139972 kubelet[2093]: I1002 19:55:27.139955 2093 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:55:27.140059 kubelet[2093]: I1002 19:55:27.140044 2093 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:55:27.140215 kubelet[2093]: I1002 19:55:27.140068 2093 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:55:27.140215 kubelet[2093]: I1002 19:55:27.140108 2093 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:55:27.140298 kubelet[2093]: I1002 19:55:27.140234 2093 state_mem.go:36] 
"Initialized new in-memory state store" Oct 2 19:55:27.144194 kubelet[2093]: I1002 19:55:27.144170 2093 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:55:27.144194 kubelet[2093]: I1002 19:55:27.144196 2093 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:55:27.144337 kubelet[2093]: I1002 19:55:27.144217 2093 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:55:27.144337 kubelet[2093]: I1002 19:55:27.144229 2093 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:55:27.145210 kubelet[2093]: E1002 19:55:27.145187 2093 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:27.145442 kubelet[2093]: E1002 19:55:27.145246 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:27.156814 kubelet[2093]: I1002 19:55:27.156776 2093 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:55:27.157444 kubelet[2093]: W1002 19:55:27.157413 2093 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:55:27.158281 kubelet[2093]: I1002 19:55:27.158260 2093 server.go:1175] "Started kubelet" Oct 2 19:55:27.161210 kubelet[2093]: W1002 19:55:27.161170 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:27.161373 kubelet[2093]: E1002 19:55:27.161352 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:27.161657 kubelet[2093]: E1002 19:55:27.161638 2093 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:55:27.161744 kubelet[2093]: E1002 19:55:27.161663 2093 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:55:27.162147 kubelet[2093]: E1002 19:55:27.161742 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a2197dbf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 158230463, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 158230463, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:27.162399 kubelet[2093]: W1002 19:55:27.162378 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.18.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:27.162578 kubelet[2093]: E1002 19:55:27.162404 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:27.162864 kubelet[2093]: I1002 19:55:27.162654 2093 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:55:27.163658 kubelet[2093]: I1002 19:55:27.163631 2093 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:55:27.164000 audit[2093]: AVC avc: denied { mac_admin } for pid=2093 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:27.164000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:27.164000 audit[2093]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d03470 a1=c000597968 a2=c000d03440 a3=25 items=0 ppid=1 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.164000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:55:27.164000 audit[2093]: AVC avc: denied { mac_admin } for pid=2093 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:27.164000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:27.164000 audit[2093]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d64140 a1=c000597980 a2=c000d03500 a3=25 items=0 ppid=1 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.164000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:55:27.166175 kubelet[2093]: I1002 19:55:27.165533 2093 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:55:27.166175 kubelet[2093]: I1002 19:55:27.165696 2093 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:55:27.166175 kubelet[2093]: I1002 19:55:27.165780 2093 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:55:27.167415 kubelet[2093]: E1002 19:55:27.167333 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a24db41e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 161652254, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 161652254, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:27.171713 kubelet[2093]: I1002 19:55:27.171151 2093 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:55:27.171713 kubelet[2093]: I1002 19:55:27.171224 2093 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:55:27.172848 kubelet[2093]: W1002 19:55:27.172826 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:27.173049 kubelet[2093]: E1002 19:55:27.172854 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:27.173129 kubelet[2093]: E1002 19:55:27.173107 2093 kubelet.go:2373] "Container runtime network 
not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:27.175095 kubelet[2093]: E1002 19:55:27.173743 2093 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.18.171" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:27.247000 audit[2109]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.247000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdba1adcc0 a2=0 a3=7ffdba1adcac items=0 ppid=2093 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:55:27.249000 audit[2113]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.249000 audit[2113]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe4d967940 a2=0 a3=7ffe4d96792c items=0 ppid=2093 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:55:27.251689 kubelet[2093]: I1002 19:55:27.251668 2093 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:55:27.251890 kubelet[2093]: I1002 19:55:27.251875 2093 
cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:55:27.251978 kubelet[2093]: I1002 19:55:27.251969 2093 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:55:27.257252 kubelet[2093]: I1002 19:55:27.257231 2093 policy_none.go:49] "None policy: Start" Oct 2 19:55:27.257533 kubelet[2093]: E1002 19:55:27.257462 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a79246d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.171 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250032342, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250032342, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:27.258485 kubelet[2093]: I1002 19:55:27.258472 2093 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:55:27.258574 kubelet[2093]: I1002 19:55:27.258567 2093 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:55:27.263000 kubelet[2093]: E1002 19:55:27.262912 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7926a7f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.171 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250041471, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250041471, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:27.264434 kubelet[2093]: E1002 19:55:27.264377 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7927cc6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.171 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250046150, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250046150, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:27.267648 systemd[1]: Created slice kubepods.slice. 
Oct 2 19:55:27.252000 audit[2115]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.252000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd678c9eb0 a2=0 a3=7ffd678c9e9c items=0 ppid=2093 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:55:27.271540 kubelet[2093]: E1002 19:55:27.271315 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:27.272352 kubelet[2093]: I1002 19:55:27.272333 2093 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.171" Oct 2 19:55:27.275000 audit[2120]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.275000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffff930ba80 a2=0 a3=7ffff930ba6c items=0 ppid=2093 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:55:27.278721 systemd[1]: Created slice kubepods-burstable.slice. 
Oct 2 19:55:27.279017 kubelet[2093]: E1002 19:55:27.278834 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a79246d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.171 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250032342, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 272279062, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a79246d6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:27.279178 kubelet[2093]: E1002 19:55:27.279146 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.171" Oct 2 19:55:27.280695 kubelet[2093]: E1002 19:55:27.280520 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7926a7f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.171 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250041471, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 272287447, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7926a7f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:27.284061 kubelet[2093]: E1002 19:55:27.283732 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7927cc6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.171 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250046150, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 272293186, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7927cc6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:27.283888 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:55:27.290916 kubelet[2093]: I1002 19:55:27.290885 2093 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:55:27.291054 kubelet[2093]: I1002 19:55:27.290956 2093 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:55:27.289000 audit[2093]: AVC avc: denied { mac_admin } for pid=2093 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:27.289000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:27.289000 audit[2093]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c6d380 a1=c0008b2258 a2=c000c6d350 a3=25 items=0 ppid=1 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.289000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:55:27.291381 kubelet[2093]: I1002 19:55:27.291210 2093 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:55:27.293899 kubelet[2093]: E1002 19:55:27.293680 2093 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.171\" not found" Oct 2 19:55:27.295051 kubelet[2093]: E1002 19:55:27.294570 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283aa23e84b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 293130827, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 293130827, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:27.334000 audit[2125]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.334000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe2689ecb0 a2=0 a3=7ffe2689ec9c items=0 ppid=2093 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:55:27.335000 audit[2126]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.335000 audit[2126]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=124 a0=3 a1=7ffc1ab61c00 a2=0 a3=7ffc1ab61bec items=0 ppid=2093 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:55:27.341000 audit[2129]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.341000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff8d6c6bc0 a2=0 a3=7fff8d6c6bac items=0 ppid=2093 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:55:27.346000 audit[2132]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.346000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff305a33a0 a2=0 a3=7fff305a338c items=0 ppid=2093 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:55:27.348000 audit[2133]: 
NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.348000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4c076460 a2=0 a3=7ffe4c07644c items=0 ppid=2093 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:55:27.349000 audit[2134]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.349000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa63b5bf0 a2=0 a3=7fffa63b5bdc items=0 ppid=2093 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:55:27.352000 audit[2136]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.352000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff280bc0d0 a2=0 a3=7fff280bc0bc items=0 ppid=2093 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.352000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:55:27.371715 kubelet[2093]: E1002 19:55:27.371679 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:27.375605 kubelet[2093]: E1002 19:55:27.375578 2093 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.18.171" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:27.355000 audit[2138]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.355000 audit[2138]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdce782870 a2=0 a3=7ffdce78285c items=0 ppid=2093 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:55:27.383000 audit[2141]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.383000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fff7ce290a0 a2=0 a3=7fff7ce2908c items=0 ppid=2093 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.383000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:55:27.388000 audit[2143]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.388000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd36f47ef0 a2=0 a3=7ffd36f47edc items=0 ppid=2093 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:55:27.398000 audit[2146]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.398000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffcccdb6430 a2=0 a3=7ffcccdb641c items=0 ppid=2093 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:55:27.400435 kubelet[2093]: I1002 19:55:27.400411 2093 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:55:27.400000 audit[2147]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=2147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.400000 audit[2147]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffccd261b50 a2=0 a3=7ffccd261b3c items=0 ppid=2093 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.400000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:55:27.402000 audit[2148]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=2148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.402000 audit[2148]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde986ea20 a2=0 a3=10e3 items=0 ppid=2093 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:55:27.402000 audit[2149]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=2149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.402000 audit[2149]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe57614250 a2=0 a3=7ffe5761423c items=0 ppid=2093 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.402000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:55:27.403000 audit[2150]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.403000 audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8d2199b0 a2=0 a3=7fff8d21999c items=0 ppid=2093 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:55:27.405000 audit[2152]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:27.405000 audit[2152]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0c4ca5b0 a2=0 a3=7ffc0c4ca59c items=0 ppid=2093 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:55:27.405000 audit[2153]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.405000 audit[2153]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe9236c0b0 a2=0 a3=7ffe9236c09c items=0 ppid=2093 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.405000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:55:27.407000 audit[2154]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.407000 audit[2154]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fffdb98a440 a2=0 a3=7fffdb98a42c items=0 ppid=2093 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:55:27.411000 audit[2156]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.411000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff41a2c640 a2=0 a3=7fff41a2c62c items=0 ppid=2093 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:55:27.412000 audit[2157]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.412000 audit[2157]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd998702b0 a2=0 a3=7ffd9987029c items=0 ppid=2093 
pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:55:27.414000 audit[2158]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.414000 audit[2158]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffef67ad90 a2=0 a3=7fffef67ad7c items=0 ppid=2093 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:55:27.416000 audit[2160]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.416000 audit[2160]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc91462c00 a2=0 a3=7ffc91462bec items=0 ppid=2093 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:55:27.421000 audit[2162]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.421000 audit[2162]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 
a0=3 a1=7fff5d2b0ac0 a2=0 a3=7fff5d2b0aac items=0 ppid=2093 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:55:27.422000 audit[2164]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.422000 audit[2164]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc60bebff0 a2=0 a3=7ffc60bebfdc items=0 ppid=2093 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:55:27.425000 audit[2166]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.425000 audit[2166]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe839f1700 a2=0 a3=7ffe839f16ec items=0 ppid=2093 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.425000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:55:27.429000 audit[2168]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.429000 audit[2168]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffdb728cc00 a2=0 a3=7ffdb728cbec items=0 ppid=2093 pid=2168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:55:27.430691 kubelet[2093]: I1002 19:55:27.430667 2093 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:55:27.430794 kubelet[2093]: I1002 19:55:27.430697 2093 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:55:27.430794 kubelet[2093]: I1002 19:55:27.430721 2093 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:55:27.430794 kubelet[2093]: E1002 19:55:27.430778 2093 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:55:27.431000 audit[2169]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.431000 audit[2169]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc242a9f0 a2=0 a3=7fffc242a9dc items=0 ppid=2093 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:55:27.432000 audit[2170]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.432000 audit[2170]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8c4973a0 a2=0 a3=7ffd8c49738c items=0 ppid=2093 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.432000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:55:27.434173 kubelet[2093]: W1002 19:55:27.434151 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: 
User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:27.434260 kubelet[2093]: E1002 19:55:27.434184 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:27.433000 audit[2171]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:27.433000 audit[2171]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb9730650 a2=0 a3=7ffdb973063c items=0 ppid=2093 pid=2171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:27.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:55:27.472614 kubelet[2093]: E1002 19:55:27.472570 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:27.480485 kubelet[2093]: I1002 19:55:27.480460 2093 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.171" Oct 2 19:55:27.482137 kubelet[2093]: E1002 19:55:27.482115 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.171" Oct 2 19:55:27.482337 kubelet[2093]: E1002 19:55:27.482045 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a79246d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", 
ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.171 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250032342, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 480412013, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a79246d6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:27.483834 kubelet[2093]: E1002 19:55:27.483759 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7926a7f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.171 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250041471, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 480426791, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7926a7f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:27.562880 kubelet[2093]: E1002 19:55:27.562775 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7927cc6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.171 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250046150, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 480431509, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7927cc6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:27.572860 kubelet[2093]: E1002 19:55:27.572752 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:27.673479 kubelet[2093]: E1002 19:55:27.673337 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:27.773935 kubelet[2093]: E1002 19:55:27.773883 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:27.777419 kubelet[2093]: E1002 19:55:27.777389 2093 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.18.171" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:27.874816 kubelet[2093]: E1002 19:55:27.874768 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:27.883822 kubelet[2093]: I1002 19:55:27.883793 2093 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.171" Oct 2 19:55:27.885428 kubelet[2093]: E1002 19:55:27.885407 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.171" Oct 2 19:55:27.885668 kubelet[2093]: E1002 19:55:27.885353 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a79246d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", 
Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.171 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250032342, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 883755394, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a79246d6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:27.962200 kubelet[2093]: E1002 19:55:27.961923 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7926a7f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.171 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250041471, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 883760425, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 
0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7926a7f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:27.975308 kubelet[2093]: E1002 19:55:27.975262 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.075775 kubelet[2093]: E1002 19:55:28.075726 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.146284 kubelet[2093]: E1002 19:55:28.146229 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:28.164126 kubelet[2093]: E1002 19:55:28.163326 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7927cc6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.171 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250046150, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 883763382, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7927cc6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:28.175938 kubelet[2093]: E1002 19:55:28.175889 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.271708 kubelet[2093]: W1002 19:55:28.271605 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:28.271708 kubelet[2093]: E1002 19:55:28.271641 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:28.276768 kubelet[2093]: E1002 19:55:28.276728 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.288137 kubelet[2093]: W1002 19:55:28.288104 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.18.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:28.288137 kubelet[2093]: E1002 19:55:28.288140 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:28.377805 kubelet[2093]: E1002 19:55:28.377756 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 
19:55:28.478399 kubelet[2093]: E1002 19:55:28.478356 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.568022 kubelet[2093]: W1002 19:55:28.567920 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:28.568022 kubelet[2093]: E1002 19:55:28.567959 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:28.579245 kubelet[2093]: E1002 19:55:28.579199 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.579575 kubelet[2093]: E1002 19:55:28.579544 2093 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.18.171" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:28.638049 kubelet[2093]: W1002 19:55:28.638003 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:28.638049 kubelet[2093]: E1002 19:55:28.638042 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:28.680340 kubelet[2093]: E1002 19:55:28.680300 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.687232 
kubelet[2093]: I1002 19:55:28.687200 2093 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.171" Oct 2 19:55:28.689226 kubelet[2093]: E1002 19:55:28.689201 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.171" Oct 2 19:55:28.691234 kubelet[2093]: E1002 19:55:28.689214 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a79246d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.171 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250032342, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 28, 687144620, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a79246d6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:28.696493 kubelet[2093]: E1002 19:55:28.696398 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7926a7f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.171 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250041471, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 28, 687155742, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7926a7f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:28.762440 kubelet[2093]: E1002 19:55:28.762337 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7927cc6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.171 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250046150, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 28, 687161457, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7927cc6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:28.780849 kubelet[2093]: E1002 19:55:28.780808 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.881606 kubelet[2093]: E1002 19:55:28.881490 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:28.982199 kubelet[2093]: E1002 19:55:28.982053 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.082651 kubelet[2093]: E1002 19:55:29.082610 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.147159 kubelet[2093]: E1002 19:55:29.147035 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:29.183177 kubelet[2093]: E1002 19:55:29.183129 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.283866 kubelet[2093]: E1002 19:55:29.283822 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.384804 kubelet[2093]: E1002 19:55:29.384683 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.485559 kubelet[2093]: E1002 19:55:29.485454 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.585991 kubelet[2093]: E1002 19:55:29.585948 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.686634 kubelet[2093]: E1002 19:55:29.686596 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.787229 kubelet[2093]: E1002 19:55:29.787122 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.887778 kubelet[2093]: E1002 19:55:29.887725 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:29.925464 kubelet[2093]: W1002 19:55:29.925424 2093 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.18.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:29.925464 kubelet[2093]: E1002 19:55:29.925467 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:29.988856 kubelet[2093]: E1002 19:55:29.988812 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.089700 kubelet[2093]: E1002 19:55:30.089546 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.148141 kubelet[2093]: E1002 19:55:30.148090 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:30.181930 kubelet[2093]: E1002 19:55:30.181886 2093 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.18.171" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:30.190221 kubelet[2093]: E1002 19:55:30.190180 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.290292 kubelet[2093]: E1002 19:55:30.290265 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.290458 kubelet[2093]: I1002 19:55:30.290375 2093 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.171" Oct 2 19:55:30.292973 kubelet[2093]: E1002 19:55:30.292952 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.171" Oct 
2 19:55:30.293128 kubelet[2093]: E1002 19:55:30.292943 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a79246d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.171 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250032342, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 290338348, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a79246d6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:30.296559 kubelet[2093]: E1002 19:55:30.296452 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7926a7f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.171 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250041471, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 290346308, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7926a7f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:30.299146 kubelet[2093]: E1002 19:55:30.298808 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7927cc6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.171 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250046150, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 290350285, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7927cc6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:30.391668 kubelet[2093]: E1002 19:55:30.391550 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.492455 kubelet[2093]: E1002 19:55:30.492413 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.593185 kubelet[2093]: E1002 19:55:30.593141 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.669293 kubelet[2093]: W1002 19:55:30.669115 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:30.669293 kubelet[2093]: E1002 19:55:30.669154 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:30.693702 kubelet[2093]: E1002 19:55:30.693620 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.794206 kubelet[2093]: E1002 19:55:30.794164 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.894943 kubelet[2093]: E1002 19:55:30.894896 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:30.995621 kubelet[2093]: E1002 19:55:30.995499 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:31.096436 kubelet[2093]: E1002 19:55:31.096390 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:31.148840 kubelet[2093]: E1002 19:55:31.148794 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:31.196995 kubelet[2093]: E1002 19:55:31.196947 2093 kubelet.go:2448] "Error getting node" 
err="node \"172.31.18.171\" not found" Oct 2 19:55:31.297845 kubelet[2093]: E1002 19:55:31.297742 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:31.377663 kubelet[2093]: W1002 19:55:31.377627 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:31.377663 kubelet[2093]: E1002 19:55:31.377665 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:31.397904 kubelet[2093]: E1002 19:55:31.397859 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:31.498669 kubelet[2093]: E1002 19:55:31.498625 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:31.585524 kubelet[2093]: W1002 19:55:31.585416 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:31.585524 kubelet[2093]: E1002 19:55:31.585455 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:31.599358 kubelet[2093]: E1002 19:55:31.599256 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:31.700086 kubelet[2093]: E1002 19:55:31.700034 2093 
kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:31.800707 kubelet[2093]: E1002 19:55:31.800665 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:31.901488 kubelet[2093]: E1002 19:55:31.901376 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.002021 kubelet[2093]: E1002 19:55:32.001976 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.102634 kubelet[2093]: E1002 19:55:32.102588 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.148961 kubelet[2093]: E1002 19:55:32.148911 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:32.203621 kubelet[2093]: E1002 19:55:32.203511 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.292547 kubelet[2093]: E1002 19:55:32.292518 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:32.304099 kubelet[2093]: E1002 19:55:32.304047 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.404651 kubelet[2093]: E1002 19:55:32.404607 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.505640 kubelet[2093]: E1002 19:55:32.505535 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.606089 kubelet[2093]: E1002 19:55:32.606049 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.706912 kubelet[2093]: E1002 19:55:32.706865 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.807551 kubelet[2093]: E1002 19:55:32.807439 2093 kubelet.go:2448] 
"Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:32.908070 kubelet[2093]: E1002 19:55:32.908029 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.008695 kubelet[2093]: E1002 19:55:33.008653 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.109226 kubelet[2093]: E1002 19:55:33.109114 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.144939 kubelet[2093]: W1002 19:55:33.144902 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.18.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:33.144939 kubelet[2093]: E1002 19:55:33.144940 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.171" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:33.150069 kubelet[2093]: E1002 19:55:33.150034 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:33.210009 kubelet[2093]: E1002 19:55:33.209969 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.310701 kubelet[2093]: E1002 19:55:33.310658 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.384264 kubelet[2093]: E1002 19:55:33.384158 2093 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.18.171" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:33.411355 kubelet[2093]: E1002 19:55:33.411304 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" 
Oct 2 19:55:33.494710 kubelet[2093]: I1002 19:55:33.494669 2093 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.171" Oct 2 19:55:33.496408 kubelet[2093]: E1002 19:55:33.496380 2093 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.18.171" Oct 2 19:55:33.496543 kubelet[2093]: E1002 19:55:33.496374 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a79246d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.18.171 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250032342, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 33, 494621568, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a79246d6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:33.497838 kubelet[2093]: E1002 19:55:33.497764 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7926a7f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.18.171 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250041471, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 33, 494633461, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7926a7f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:33.499277 kubelet[2093]: E1002 19:55:33.499141 2093 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.171.178a6283a7927cc6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.18.171", UID:"172.31.18.171", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.18.171 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.18.171"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 27, 250046150, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 33, 494637667, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.18.171.178a6283a7927cc6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:33.512488 kubelet[2093]: E1002 19:55:33.512430 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.613135 kubelet[2093]: E1002 19:55:33.613098 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.713929 kubelet[2093]: E1002 19:55:33.713822 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.814538 kubelet[2093]: E1002 19:55:33.814494 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:33.915200 kubelet[2093]: E1002 19:55:33.915154 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.015876 kubelet[2093]: E1002 19:55:34.015755 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.116821 kubelet[2093]: E1002 19:55:34.116771 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.150291 kubelet[2093]: E1002 19:55:34.150244 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:34.217401 kubelet[2093]: E1002 19:55:34.217360 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.318322 kubelet[2093]: E1002 19:55:34.318213 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.418799 kubelet[2093]: E1002 19:55:34.418756 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.519513 kubelet[2093]: E1002 19:55:34.519471 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.620293 kubelet[2093]: E1002 19:55:34.620129 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.626920 kubelet[2093]: W1002 19:55:34.626885 2093 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:34.626920 kubelet[2093]: E1002 19:55:34.626921 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:34.720623 kubelet[2093]: E1002 19:55:34.720567 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.821082 kubelet[2093]: E1002 19:55:34.821028 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:34.921584 kubelet[2093]: E1002 19:55:34.921468 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.022005 kubelet[2093]: E1002 19:55:35.021952 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.122431 kubelet[2093]: E1002 19:55:35.122385 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.150936 kubelet[2093]: E1002 19:55:35.150881 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:35.223234 kubelet[2093]: E1002 19:55:35.223129 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.323993 kubelet[2093]: E1002 19:55:35.323944 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.424519 kubelet[2093]: E1002 19:55:35.424381 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.524789 kubelet[2093]: E1002 19:55:35.524671 2093 kubelet.go:2448] "Error getting node" 
err="node \"172.31.18.171\" not found" Oct 2 19:55:35.625253 kubelet[2093]: E1002 19:55:35.625155 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.725900 kubelet[2093]: E1002 19:55:35.725846 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.826380 kubelet[2093]: E1002 19:55:35.826232 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:35.926564 kubelet[2093]: E1002 19:55:35.926509 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.027065 kubelet[2093]: E1002 19:55:36.026941 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.127475 kubelet[2093]: E1002 19:55:36.127365 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.151909 kubelet[2093]: E1002 19:55:36.151872 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:36.228022 kubelet[2093]: E1002 19:55:36.227976 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.328586 kubelet[2093]: E1002 19:55:36.328536 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.429174 kubelet[2093]: E1002 19:55:36.429054 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.529780 kubelet[2093]: E1002 19:55:36.529739 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.630755 kubelet[2093]: E1002 19:55:36.630426 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.685910 kubelet[2093]: W1002 19:55:36.685785 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource 
"services" in API group "" at the cluster scope Oct 2 19:55:36.685910 kubelet[2093]: E1002 19:55:36.685820 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:36.731437 kubelet[2093]: E1002 19:55:36.731386 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.774154 kubelet[2093]: W1002 19:55:36.774119 2093 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:36.774154 kubelet[2093]: E1002 19:55:36.774154 2093 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:36.832333 kubelet[2093]: E1002 19:55:36.832287 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:36.932716 kubelet[2093]: E1002 19:55:36.932675 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.033224 kubelet[2093]: E1002 19:55:37.033121 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.133583 kubelet[2093]: E1002 19:55:37.133538 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.135731 kubelet[2093]: I1002 19:55:37.135691 2093 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:55:37.152169 kubelet[2093]: E1002 19:55:37.152133 2093 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:37.234563 kubelet[2093]: E1002 19:55:37.234516 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.294119 kubelet[2093]: E1002 19:55:37.293757 2093 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.171\" not found" Oct 2 19:55:37.294309 kubelet[2093]: E1002 19:55:37.293789 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:37.335175 kubelet[2093]: E1002 19:55:37.335125 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.435729 kubelet[2093]: E1002 19:55:37.435686 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.520005 kubelet[2093]: E1002 19:55:37.519966 2093 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.18.171" not found Oct 2 19:55:37.536305 kubelet[2093]: E1002 19:55:37.536220 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.637419 kubelet[2093]: E1002 19:55:37.637308 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.737852 kubelet[2093]: E1002 19:55:37.737808 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.837931 kubelet[2093]: E1002 19:55:37.837885 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:37.938388 kubelet[2093]: E1002 19:55:37.938268 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.039225 kubelet[2093]: E1002 19:55:38.039178 2093 kubelet.go:2448] "Error getting node" err="node 
\"172.31.18.171\" not found" Oct 2 19:55:38.139712 kubelet[2093]: E1002 19:55:38.139666 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.153049 kubelet[2093]: E1002 19:55:38.153003 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:38.240369 kubelet[2093]: E1002 19:55:38.240249 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.340955 kubelet[2093]: E1002 19:55:38.340915 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.441338 kubelet[2093]: E1002 19:55:38.441300 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.542127 kubelet[2093]: E1002 19:55:38.542066 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.567489 kubelet[2093]: E1002 19:55:38.567362 2093 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.18.171" not found Oct 2 19:55:38.642875 kubelet[2093]: E1002 19:55:38.642831 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.743598 kubelet[2093]: E1002 19:55:38.743550 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.844517 kubelet[2093]: E1002 19:55:38.844412 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:38.944911 kubelet[2093]: E1002 19:55:38.944864 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.045771 kubelet[2093]: E1002 19:55:39.045666 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.146461 kubelet[2093]: E1002 19:55:39.146344 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 
19:55:39.153781 kubelet[2093]: E1002 19:55:39.153681 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:39.247199 kubelet[2093]: E1002 19:55:39.247151 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.347901 kubelet[2093]: E1002 19:55:39.347857 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.448646 kubelet[2093]: E1002 19:55:39.448535 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.548993 kubelet[2093]: E1002 19:55:39.548948 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.650039 kubelet[2093]: E1002 19:55:39.649993 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.750515 kubelet[2093]: E1002 19:55:39.750473 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.792202 kubelet[2093]: E1002 19:55:39.792170 2093 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.18.171\" not found" node="172.31.18.171" Oct 2 19:55:39.851349 kubelet[2093]: E1002 19:55:39.851304 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.897642 kubelet[2093]: I1002 19:55:39.897604 2093 kubelet_node_status.go:70] "Attempting to register node" node="172.31.18.171" Oct 2 19:55:39.952203 kubelet[2093]: E1002 19:55:39.952163 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:39.969131 kubelet[2093]: I1002 19:55:39.969095 2093 kubelet_node_status.go:73] "Successfully registered node" node="172.31.18.171" Oct 2 19:55:40.052691 kubelet[2093]: E1002 19:55:40.052569 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:40.153337 kubelet[2093]: E1002 19:55:40.153282 
2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:40.154404 kubelet[2093]: E1002 19:55:40.154373 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:40.176813 sudo[1895]: pam_unix(sudo:session): session closed for user root Oct 2 19:55:40.176000 audit[1895]: USER_END pid=1895 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:40.178247 kernel: kauditd_printk_skb: 540 callbacks suppressed Oct 2 19:55:40.178300 kernel: audit: type=1106 audit(1696276540.176:641): pid=1895 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:40.183000 audit[1895]: CRED_DISP pid=1895 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:40.194002 kernel: audit: type=1104 audit(1696276540.183:642): pid=1895 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:40.210453 sshd[1892]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:40.212000 audit[1892]: USER_END pid=1892 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:40.226153 kernel: audit: type=1106 audit(1696276540.212:643): pid=1892 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:40.215117 systemd-logind[1627]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:55:40.219433 systemd[1]: sshd@6-172.31.18.171:22-139.178.89.65:37476.service: Deactivated successfully. Oct 2 19:55:40.221235 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:55:40.212000 audit[1892]: CRED_DISP pid=1892 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:40.226062 systemd-logind[1627]: Removed session 7. Oct 2 19:55:40.237124 kernel: audit: type=1104 audit(1696276540.212:644): pid=1892 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:55:40.237228 kernel: audit: type=1131 audit(1696276540.217:645): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.171:22-139.178.89.65:37476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:55:40.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.18.171:22-139.178.89.65:37476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:40.253598 kubelet[2093]: E1002 19:55:40.253560 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:40.354251 kubelet[2093]: E1002 19:55:40.354146 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:40.454360 kubelet[2093]: E1002 19:55:40.454310 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:40.554820 kubelet[2093]: E1002 19:55:40.554774 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:40.655549 kubelet[2093]: E1002 19:55:40.655435 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:40.755927 kubelet[2093]: E1002 19:55:40.755876 2093 kubelet.go:2448] "Error getting node" err="node \"172.31.18.171\" not found" Oct 2 19:55:40.856546 kubelet[2093]: I1002 19:55:40.856500 2093 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:55:40.857184 env[1633]: time="2023-10-02T19:55:40.857130997Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 19:55:40.857561 kubelet[2093]: I1002 19:55:40.857530 2093 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:55:40.857903 kubelet[2093]: E1002 19:55:40.857886 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:41.155900 kubelet[2093]: E1002 19:55:41.155861 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:41.156104 kubelet[2093]: I1002 19:55:41.155882 2093 apiserver.go:52] "Watching apiserver" Oct 2 19:55:41.162179 kubelet[2093]: I1002 19:55:41.162140 2093 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:55:41.162333 kubelet[2093]: I1002 19:55:41.162306 2093 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:55:41.178761 systemd[1]: Created slice kubepods-besteffort-pod87997981_8a5a_49c4_ad18_a071d566c053.slice. Oct 2 19:55:41.205599 systemd[1]: Created slice kubepods-burstable-pod6e8ddd69_00be_4891_b491_0a395b851c77.slice. 
Oct 2 19:55:41.258273 kubelet[2093]: I1002 19:55:41.258234 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-hostproc\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.258502 kubelet[2093]: I1002 19:55:41.258486 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-xtables-lock\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.258745 kubelet[2093]: I1002 19:55:41.258672 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mprk5\" (UniqueName: \"kubernetes.io/projected/6e8ddd69-00be-4891-b491-0a395b851c77-kube-api-access-mprk5\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.258846 kubelet[2093]: I1002 19:55:41.258761 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87997981-8a5a-49c4-ad18-a071d566c053-lib-modules\") pod \"kube-proxy-hvrjj\" (UID: \"87997981-8a5a-49c4-ad18-a071d566c053\") " pod="kube-system/kube-proxy-hvrjj" Oct 2 19:55:41.258846 kubelet[2093]: I1002 19:55:41.258797 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-bpf-maps\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.258846 kubelet[2093]: I1002 19:55:41.258829 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cni-path\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.258976 kubelet[2093]: I1002 19:55:41.258865 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-host-proc-sys-net\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.258976 kubelet[2093]: I1002 19:55:41.258899 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-host-proc-sys-kernel\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.258976 kubelet[2093]: I1002 19:55:41.258936 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87997981-8a5a-49c4-ad18-a071d566c053-xtables-lock\") pod \"kube-proxy-hvrjj\" (UID: \"87997981-8a5a-49c4-ad18-a071d566c053\") " pod="kube-system/kube-proxy-hvrjj" Oct 2 19:55:41.258976 kubelet[2093]: I1002 19:55:41.258971 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-run\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.259176 kubelet[2093]: I1002 19:55:41.259006 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-lib-modules\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") 
" pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.259176 kubelet[2093]: I1002 19:55:41.259044 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e8ddd69-00be-4891-b491-0a395b851c77-clustermesh-secrets\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.259176 kubelet[2093]: I1002 19:55:41.259089 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-config-path\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.259176 kubelet[2093]: I1002 19:55:41.259121 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-cgroup\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.259176 kubelet[2093]: I1002 19:55:41.259162 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-etc-cni-netd\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.259461 kubelet[2093]: I1002 19:55:41.259195 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e8ddd69-00be-4891-b491-0a395b851c77-hubble-tls\") pod \"cilium-7n6mw\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " pod="kube-system/cilium-7n6mw" Oct 2 19:55:41.259461 kubelet[2093]: I1002 19:55:41.259239 2093 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/87997981-8a5a-49c4-ad18-a071d566c053-kube-proxy\") pod \"kube-proxy-hvrjj\" (UID: \"87997981-8a5a-49c4-ad18-a071d566c053\") " pod="kube-system/kube-proxy-hvrjj" Oct 2 19:55:41.259461 kubelet[2093]: I1002 19:55:41.259344 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5gf\" (UniqueName: \"kubernetes.io/projected/87997981-8a5a-49c4-ad18-a071d566c053-kube-api-access-jz5gf\") pod \"kube-proxy-hvrjj\" (UID: \"87997981-8a5a-49c4-ad18-a071d566c053\") " pod="kube-system/kube-proxy-hvrjj" Oct 2 19:55:41.259461 kubelet[2093]: I1002 19:55:41.259366 2093 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:55:42.157068 kubelet[2093]: E1002 19:55:42.157017 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:42.164208 kubelet[2093]: I1002 19:55:42.164169 2093 request.go:690] Waited for 1.001113118s due to client-side throttling, not priority and fairness, request: GET:https://172.31.23.8:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Oct 2 19:55:42.295172 kubelet[2093]: E1002 19:55:42.295138 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:42.719152 env[1633]: time="2023-10-02T19:55:42.719104942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7n6mw,Uid:6e8ddd69-00be-4891-b491-0a395b851c77,Namespace:kube-system,Attempt:0,}" Oct 2 19:55:43.004036 env[1633]: time="2023-10-02T19:55:43.003989490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvrjj,Uid:87997981-8a5a-49c4-ad18-a071d566c053,Namespace:kube-system,Attempt:0,}" Oct 2 19:55:43.133112 
systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:55:43.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:43.145275 kernel: audit: type=1131 audit(1696276543.132:646): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:43.157460 kubelet[2093]: E1002 19:55:43.157402 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:43.166000 audit: BPF prog-id=60 op=UNLOAD Oct 2 19:55:43.166000 audit: BPF prog-id=59 op=UNLOAD Oct 2 19:55:43.174476 kernel: audit: type=1334 audit(1696276543.166:647): prog-id=60 op=UNLOAD Oct 2 19:55:43.174595 kernel: audit: type=1334 audit(1696276543.166:648): prog-id=59 op=UNLOAD Oct 2 19:55:43.174635 kernel: audit: type=1334 audit(1696276543.166:649): prog-id=58 op=UNLOAD Oct 2 19:55:43.166000 audit: BPF prog-id=58 op=UNLOAD Oct 2 19:55:43.317692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918457521.mount: Deactivated successfully. 
Oct 2 19:55:43.334385 env[1633]: time="2023-10-02T19:55:43.334242465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:43.335790 env[1633]: time="2023-10-02T19:55:43.335618577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:43.340372 env[1633]: time="2023-10-02T19:55:43.340305764Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:43.345280 env[1633]: time="2023-10-02T19:55:43.345233897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:43.349876 env[1633]: time="2023-10-02T19:55:43.349829299Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:43.363165 env[1633]: time="2023-10-02T19:55:43.361970041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:43.363165 env[1633]: time="2023-10-02T19:55:43.362836799Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:43.368284 env[1633]: time="2023-10-02T19:55:43.368239370Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:43.409978 env[1633]: time="2023-10-02T19:55:43.405400528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:55:43.409978 env[1633]: time="2023-10-02T19:55:43.405444511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:55:43.409978 env[1633]: time="2023-10-02T19:55:43.405463310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:55:43.409978 env[1633]: time="2023-10-02T19:55:43.405620990Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b pid=2188 runtime=io.containerd.runc.v2 Oct 2 19:55:43.432185 env[1633]: time="2023-10-02T19:55:43.431045444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:55:43.434846 env[1633]: time="2023-10-02T19:55:43.431438001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:55:43.443751 env[1633]: time="2023-10-02T19:55:43.443412612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:55:43.445172 env[1633]: time="2023-10-02T19:55:43.445051999Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/521ea84fb461a60cacef5c974469313d78bc21c4487e96c07ab963d005d59b01 pid=2208 runtime=io.containerd.runc.v2 Oct 2 19:55:43.459949 systemd[1]: Started cri-containerd-2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b.scope. Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.496097 kernel: audit: type=1400 audit(1696276543.487:650): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.487000 audit: BPF prog-id=76 op=LOAD Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2188 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264333036623266636436313565623836646163626662346133383163 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=16 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=2188 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264333036623266636436313565623836646163626662346133383163 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.488000 audit: BPF prog-id=77 op=LOAD Oct 2 19:55:43.488000 audit[2209]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c000185ac0 items=0 ppid=2188 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264333036623266636436313565623836646163626662346133383163 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit: BPF prog-id=78 op=LOAD Oct 2 19:55:43.495000 audit[2209]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c000185b08 items=0 ppid=2188 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.495000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264333036623266636436313565623836646163626662346133383163 Oct 2 19:55:43.495000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:55:43.495000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { perfmon } for pid=2209 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit[2209]: AVC avc: denied { bpf } for pid=2209 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.495000 audit: BPF prog-id=79 op=LOAD Oct 2 19:55:43.495000 audit[2209]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c000185f18 items=0 ppid=2188 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264333036623266636436313565623836646163626662346133383163 Oct 2 19:55:43.497637 systemd[1]: Started cri-containerd-521ea84fb461a60cacef5c974469313d78bc21c4487e96c07ab963d005d59b01.scope. 
Oct 2 19:55:43.535226 env[1633]: time="2023-10-02T19:55:43.535165476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7n6mw,Uid:6e8ddd69-00be-4891-b491-0a395b851c77,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\"" Oct 2 19:55:43.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.536000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit: BPF prog-id=80 op=LOAD Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2208 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532316561383466623436316136306361636566356339373434363933 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2208 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.537000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532316561383466623436316136306361636566356339373434363933 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: 
denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit: BPF prog-id=81 op=LOAD Oct 2 19:55:43.537000 audit[2223]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c00028a7a0 items=0 ppid=2208 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532316561383466623436316136306361636566356339373434363933 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.537000 audit: BPF prog-id=82 op=LOAD Oct 2 19:55:43.537000 audit[2223]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c00028a7e8 items=0 ppid=2208 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532316561383466623436316136306361636566356339373434363933 Oct 2 19:55:43.538000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:55:43.538000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { perfmon } for pid=2223 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit[2223]: AVC avc: denied { bpf } for pid=2223 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:43.538000 audit: BPF prog-id=83 op=LOAD Oct 2 19:55:43.538000 audit[2223]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c00028abf8 items=0 ppid=2208 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:43.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532316561383466623436316136306361636566356339373434363933 Oct 2 19:55:43.542994 env[1633]: time="2023-10-02T19:55:43.542957997Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:55:43.556189 env[1633]: time="2023-10-02T19:55:43.556142827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvrjj,Uid:87997981-8a5a-49c4-ad18-a071d566c053,Namespace:kube-system,Attempt:0,} returns sandbox id \"521ea84fb461a60cacef5c974469313d78bc21c4487e96c07ab963d005d59b01\"" Oct 2 19:55:44.158596 kubelet[2093]: E1002 19:55:44.158539 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:45.159588 kubelet[2093]: E1002 19:55:45.159472 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:46.160104 kubelet[2093]: E1002 19:55:46.160054 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:47.144561 kubelet[2093]: E1002 19:55:47.144502 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:47.160485 kubelet[2093]: E1002 19:55:47.160406 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:55:47.310493 kubelet[2093]: E1002 19:55:47.310443 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:48.161196 kubelet[2093]: E1002 19:55:48.161141 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:49.162195 kubelet[2093]: E1002 19:55:49.162121 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:50.163256 kubelet[2093]: E1002 19:55:50.163152 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:50.349164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858796706.mount: Deactivated successfully. Oct 2 19:55:51.163662 kubelet[2093]: E1002 19:55:51.163600 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:52.164838 kubelet[2093]: E1002 19:55:52.164759 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:52.311419 kubelet[2093]: E1002 19:55:52.311389 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:53.165678 kubelet[2093]: E1002 19:55:53.165634 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:54.033417 env[1633]: time="2023-10-02T19:55:54.033357622Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 2 19:55:54.035814 env[1633]: time="2023-10-02T19:55:54.035769455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:54.038205 env[1633]: time="2023-10-02T19:55:54.038164214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:54.039031 env[1633]: time="2023-10-02T19:55:54.038988185Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 19:55:54.041236 env[1633]: time="2023-10-02T19:55:54.041177109Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:55:54.049235 env[1633]: time="2023-10-02T19:55:54.049179704Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:55:54.071295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2921702189.mount: Deactivated successfully. Oct 2 19:55:54.080778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187405847.mount: Deactivated successfully. 
Oct 2 19:55:54.089776 env[1633]: time="2023-10-02T19:55:54.089718846Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\"" Oct 2 19:55:54.090770 env[1633]: time="2023-10-02T19:55:54.090734761Z" level=info msg="StartContainer for \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\"" Oct 2 19:55:54.118692 systemd[1]: Started cri-containerd-aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363.scope. Oct 2 19:55:54.136956 systemd[1]: cri-containerd-aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363.scope: Deactivated successfully. Oct 2 19:55:54.166549 kubelet[2093]: E1002 19:55:54.166438 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:54.277896 env[1633]: time="2023-10-02T19:55:54.277828531Z" level=info msg="shim disconnected" id=aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363 Oct 2 19:55:54.277896 env[1633]: time="2023-10-02T19:55:54.277897242Z" level=warning msg="cleaning up after shim disconnected" id=aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363 namespace=k8s.io Oct 2 19:55:54.278340 env[1633]: time="2023-10-02T19:55:54.277952638Z" level=info msg="cleaning up dead shim" Oct 2 19:55:54.290200 env[1633]: time="2023-10-02T19:55:54.289144028Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:55:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2291 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:55:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:55:54.290200 
env[1633]: time="2023-10-02T19:55:54.289448902Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:55:54.292583 env[1633]: time="2023-10-02T19:55:54.292527972Z" level=error msg="Failed to pipe stdout of container \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\"" error="reading from a closed fifo" Oct 2 19:55:54.293684 env[1633]: time="2023-10-02T19:55:54.293633705Z" level=error msg="Failed to pipe stderr of container \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\"" error="reading from a closed fifo" Oct 2 19:55:54.295696 env[1633]: time="2023-10-02T19:55:54.295631560Z" level=error msg="StartContainer for \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:55:54.296112 kubelet[2093]: E1002 19:55:54.296069 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363" Oct 2 19:55:54.296280 kubelet[2093]: E1002 19:55:54.296260 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:55:54.296280 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:55:54.296280 kubelet[2093]: rm 
/hostbin/cilium-mount Oct 2 19:55:54.296280 kubelet[2093]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mprk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:55:54.296544 kubelet[2093]: E1002 19:55:54.296325 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container 
process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:55:54.522294 env[1633]: time="2023-10-02T19:55:54.522248070Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:55:54.554005 env[1633]: time="2023-10-02T19:55:54.553189406Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\"" Oct 2 19:55:54.554408 env[1633]: time="2023-10-02T19:55:54.554374552Z" level=info msg="StartContainer for \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\"" Oct 2 19:55:54.608862 systemd[1]: Started cri-containerd-927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd.scope. Oct 2 19:55:54.638278 systemd[1]: cri-containerd-927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd.scope: Deactivated successfully. 
Oct 2 19:55:54.712481 env[1633]: time="2023-10-02T19:55:54.712385707Z" level=info msg="shim disconnected" id=927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd Oct 2 19:55:54.712817 env[1633]: time="2023-10-02T19:55:54.712765915Z" level=warning msg="cleaning up after shim disconnected" id=927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd namespace=k8s.io Oct 2 19:55:54.712922 env[1633]: time="2023-10-02T19:55:54.712906372Z" level=info msg="cleaning up dead shim" Oct 2 19:55:54.784844 env[1633]: time="2023-10-02T19:55:54.784789146Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:55:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2330 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:55:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:55:54.785278 env[1633]: time="2023-10-02T19:55:54.785153415Z" level=error msg="copy shim log" error="read /proc/self/fd/50: file already closed" Oct 2 19:55:54.785644 env[1633]: time="2023-10-02T19:55:54.785595345Z" level=error msg="Failed to pipe stderr of container \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\"" error="reading from a closed fifo" Oct 2 19:55:54.785780 env[1633]: time="2023-10-02T19:55:54.785698705Z" level=error msg="Failed to pipe stdout of container \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\"" error="reading from a closed fifo" Oct 2 19:55:54.794574 env[1633]: time="2023-10-02T19:55:54.794476591Z" level=error msg="StartContainer for \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:55:54.794999 kubelet[2093]: E1002 19:55:54.794964 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd" Oct 2 19:55:54.795195 kubelet[2093]: E1002 19:55:54.795159 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:55:54.795195 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:55:54.795195 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:55:54.795195 kubelet[2093]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mprk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:55:54.795539 kubelet[2093]: E1002 19:55:54.795217 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:55:55.075208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363-rootfs.mount: Deactivated successfully. Oct 2 19:55:55.167583 kubelet[2093]: E1002 19:55:55.167320 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:55.424303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275472862.mount: Deactivated successfully. 
Oct 2 19:55:55.524715 kubelet[2093]: I1002 19:55:55.524684 2093 scope.go:115] "RemoveContainer" containerID="aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363" Oct 2 19:55:55.525776 kubelet[2093]: I1002 19:55:55.525753 2093 scope.go:115] "RemoveContainer" containerID="aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363" Oct 2 19:55:55.527669 env[1633]: time="2023-10-02T19:55:55.527626235Z" level=info msg="RemoveContainer for \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\"" Oct 2 19:55:55.533662 env[1633]: time="2023-10-02T19:55:55.533612811Z" level=info msg="RemoveContainer for \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\" returns successfully" Oct 2 19:55:55.534389 env[1633]: time="2023-10-02T19:55:55.534356835Z" level=info msg="RemoveContainer for \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\"" Oct 2 19:55:55.534523 env[1633]: time="2023-10-02T19:55:55.534432898Z" level=info msg="RemoveContainer for \"aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363\" returns successfully" Oct 2 19:55:55.535146 kubelet[2093]: E1002 19:55:55.535122 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:55:56.118623 env[1633]: time="2023-10-02T19:55:56.118437059Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:56.122016 env[1633]: time="2023-10-02T19:55:56.121980312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 2 19:55:56.124448 env[1633]: time="2023-10-02T19:55:56.124390540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:56.126850 env[1633]: time="2023-10-02T19:55:56.126620491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:56.127796 env[1633]: time="2023-10-02T19:55:56.127593400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 19:55:56.130269 env[1633]: time="2023-10-02T19:55:56.130234444Z" level=info msg="CreateContainer within sandbox \"521ea84fb461a60cacef5c974469313d78bc21c4487e96c07ab963d005d59b01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:55:56.152900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73450718.mount: Deactivated successfully. Oct 2 19:55:56.166661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2668720589.mount: Deactivated successfully. 
Oct 2 19:55:56.169273 kubelet[2093]: E1002 19:55:56.169242 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:56.177173 env[1633]: time="2023-10-02T19:55:56.177108966Z" level=info msg="CreateContainer within sandbox \"521ea84fb461a60cacef5c974469313d78bc21c4487e96c07ab963d005d59b01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d551b5a066f3012a3447d3462450de88d031d32238db5e44ef74ccdf1a9b945c\"" Oct 2 19:55:56.177988 env[1633]: time="2023-10-02T19:55:56.177868065Z" level=info msg="StartContainer for \"d551b5a066f3012a3447d3462450de88d031d32238db5e44ef74ccdf1a9b945c\"" Oct 2 19:55:56.225623 systemd[1]: Started cri-containerd-d551b5a066f3012a3447d3462450de88d031d32238db5e44ef74ccdf1a9b945c.scope. Oct 2 19:55:56.266199 kernel: kauditd_printk_skb: 113 callbacks suppressed Oct 2 19:55:56.266345 kernel: audit: type=1400 audit(1696276556.259:686): avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2208 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.273463 kernel: audit: type=1300 audit(1696276556.259:686): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2208 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435353162356130363666333031326133343437643334363234353064 Oct 2 19:55:56.280252 kernel: audit: type=1327 audit(1696276556.259:686): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435353162356130363666333031326133343437643334363234353064 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.292704 kernel: audit: type=1400 audit(1696276556.259:687): avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.292810 kernel: audit: type=1400 audit(1696276556.259:687): avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.309131 kernel: audit: type=1400 audit(1696276556.259:687): avc: denied { bpf } for pid=2350 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.309245 kernel: audit: type=1400 audit(1696276556.259:687): avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.317407 kernel: audit: type=1400 audit(1696276556.259:687): avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.330361 kernel: audit: type=1400 audit(1696276556.259:687): avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.330489 kernel: audit: type=1400 audit(1696276556.259:687): avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { 
perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.259000 audit: BPF prog-id=84 op=LOAD Oct 2 19:55:56.330902 env[1633]: time="2023-10-02T19:55:56.330803181Z" level=info msg="StartContainer for \"d551b5a066f3012a3447d3462450de88d031d32238db5e44ef74ccdf1a9b945c\" returns successfully" Oct 2 19:55:56.259000 audit[2350]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00031f220 items=0 ppid=2208 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435353162356130363666333031326133343437643334363234353064 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit: BPF prog-id=85 op=LOAD Oct 2 19:55:56.265000 audit[2350]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00031f268 items=0 ppid=2208 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.265000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435353162356130363666333031326133343437643334363234353064 Oct 2 19:55:56.265000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:55:56.265000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { perfmon } for pid=2350 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit[2350]: AVC avc: denied { bpf } for pid=2350 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:56.265000 audit: BPF prog-id=86 op=LOAD Oct 2 19:55:56.265000 audit[2350]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00031f2f8 items=0 ppid=2208 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435353162356130363666333031326133343437643334363234353064 Oct 2 19:55:56.372329 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:55:56.372551 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:55:56.386359 kernel: IPVS: ipvs loaded. Oct 2 19:55:56.410110 kernel: IPVS: [rr] scheduler registered. Oct 2 19:55:56.421419 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:55:56.430105 kernel: IPVS: [sh] scheduler registered. 
Oct 2 19:55:56.491000 audit[2408]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.491000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe416c01e0 a2=0 a3=7ffe416c01cc items=0 ppid=2361 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:55:56.499000 audit[2409]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.499000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce16063e0 a2=0 a3=7ffce16063cc items=0 ppid=2361 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.499000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:55:56.504000 audit[2410]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.504000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee56d74b0 a2=0 a3=7ffee56d749c items=0 ppid=2361 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.504000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:55:56.509000 audit[2411]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.509000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcec5dfe0 a2=0 a3=7ffdcec5dfcc items=0 ppid=2361 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:55:56.513000 audit[2412]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.513000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff32702a70 a2=0 a3=7fff32702a5c items=0 ppid=2361 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:55:56.515000 audit[2413]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.515000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4351ca90 a2=0 a3=7ffe4351ca7c items=0 ppid=2361 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:55:56.515000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:55:56.530324 kubelet[2093]: E1002 19:55:56.530151 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:55:56.609000 audit[2414]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.609000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcca168b10 a2=0 a3=7ffcca168afc items=0 ppid=2361 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:55:56.614000 audit[2416]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.614000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff74a2c640 a2=0 a3=7fff74a2c62c items=0 ppid=2361 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.614000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:55:56.619000 audit[2419]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.619000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcce368bb0 a2=0 a3=7ffcce368b9c items=0 ppid=2361 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:55:56.622000 audit[2420]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.622000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1eb955d0 a2=0 a3=7ffe1eb955bc items=0 ppid=2361 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:55:56.625000 audit[2422]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.625000 audit[2422]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7fff89fc4d20 a2=0 a3=7fff89fc4d0c items=0 ppid=2361 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:55:56.628000 audit[2423]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.628000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3acb9cd0 a2=0 a3=7fff3acb9cbc items=0 ppid=2361 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:55:56.632000 audit[2425]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.632000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe80bec7a0 a2=0 a3=7ffe80bec78c items=0 ppid=2361 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.632000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:55:56.639000 audit[2428]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.639000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff2dc15540 a2=0 a3=7fff2dc1552c items=0 ppid=2361 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:55:56.642000 audit[2429]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.642000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff56012da0 a2=0 a3=7fff56012d8c items=0 ppid=2361 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.642000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:55:56.647000 audit[2431]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.647000 audit[2431]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffe066e0760 a2=0 a3=7ffe066e074c items=0 ppid=2361 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:55:56.648000 audit[2432]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.648000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd3c99120 a2=0 a3=7ffcd3c9910c items=0 ppid=2361 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:55:56.652000 audit[2434]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.652000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0f5c0b60 a2=0 a3=7ffd0f5c0b4c items=0 ppid=2361 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.652000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:55:56.661000 audit[2437]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.661000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffecc989700 a2=0 a3=7ffecc9896ec items=0 ppid=2361 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:55:56.668000 audit[2440]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.668000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffabd9d320 a2=0 a3=7fffabd9d30c items=0 ppid=2361 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:55:56.670000 audit[2441]: NETFILTER_CFG table=nat:55 family=2 entries=1 
op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.670000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeef12d8e0 a2=0 a3=7ffeef12d8cc items=0 ppid=2361 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:55:56.675000 audit[2443]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.675000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc94ad2b30 a2=0 a3=7ffc94ad2b1c items=0 ppid=2361 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:55:56.684000 audit[2446]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:56.684000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff745461a0 a2=0 a3=7fff7454618c items=0 ppid=2361 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.684000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:55:56.707000 audit[2450]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:55:56.707000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff91794d40 a2=0 a3=7fff91794d2c items=0 ppid=2361 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:55:56.719000 audit[2450]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:55:56.719000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff91794d40 a2=0 a3=7fff91794d2c items=0 ppid=2361 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:55:56.796000 audit[2479]: NETFILTER_CFG table=filter:60 family=2 entries=12 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:55:56.796000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe0badbb00 a2=0 a3=7ffe0badbaec items=0 ppid=2361 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:55:56.798000 audit[2479]: NETFILTER_CFG table=nat:61 family=2 entries=20 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:55:56.798000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe0badbb00 a2=0 a3=7ffe0badbaec items=0 ppid=2361 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:55:56.817000 audit[2480]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.817000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcb61f9ad0 a2=0 a3=7ffcb61f9abc items=0 ppid=2361 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:55:56.832000 audit[2482]: NETFILTER_CFG table=filter:63 family=10 entries=2 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.832000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff3107af40 a2=0 a3=7fff3107af2c items=0 ppid=2361 pid=2482 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.832000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:55:56.845000 audit[2486]: NETFILTER_CFG table=filter:64 family=10 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.845000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffedaa8d580 a2=0 a3=7ffedaa8d56c items=0 ppid=2361 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:55:56.848000 audit[2487]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.848000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe4b06250 a2=0 a3=7fffe4b0623c items=0 ppid=2361 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.848000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:55:56.852000 audit[2489]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.852000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd809edb0 a2=0 a3=7ffcd809ed9c items=0 ppid=2361 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.852000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:55:56.853000 audit[2490]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.853000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4d315300 a2=0 a3=7ffd4d3152ec items=0 ppid=2361 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.853000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:55:56.858000 audit[2492]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.858000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdc4e07f60 a2=0 a3=7ffdc4e07f4c items=0 ppid=2361 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:55:56.865000 audit[2495]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.865000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff30295810 a2=0 a3=7fff302957fc items=0 ppid=2361 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:55:56.869000 audit[2496]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.869000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdeb5eb950 a2=0 a3=7ffdeb5eb93c items=0 ppid=2361 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:55:56.877000 audit[2498]: NETFILTER_CFG table=filter:71 
family=10 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.877000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfcfbcd50 a2=0 a3=7ffdfcfbcd3c items=0 ppid=2361 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:55:56.884000 audit[2499]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.884000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9bc940d0 a2=0 a3=7ffd9bc940bc items=0 ppid=2361 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:55:56.889000 audit[2501]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.889000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffba63e990 a2=0 a3=7fffba63e97c items=0 ppid=2361 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.889000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:55:56.898000 audit[2504]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.898000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe87e10fb0 a2=0 a3=7ffe87e10f9c items=0 ppid=2361 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.898000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:55:56.905000 audit[2507]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.905000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9bece7a0 a2=0 a3=7fff9bece78c items=0 ppid=2361 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:55:56.908000 audit[2508]: NETFILTER_CFG table=nat:76 family=10 entries=1 
op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.908000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5b510b60 a2=0 a3=7fff5b510b4c items=0 ppid=2361 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:55:56.913000 audit[2510]: NETFILTER_CFG table=nat:77 family=10 entries=2 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.913000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff263b00b0 a2=0 a3=7fff263b009c items=0 ppid=2361 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:55:56.917000 audit[2513]: NETFILTER_CFG table=nat:78 family=10 entries=2 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:56.917000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffedef941e0 a2=0 a3=7ffedef941cc items=0 ppid=2361 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.917000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:55:56.926000 audit[2517]: NETFILTER_CFG table=filter:79 family=10 entries=3 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:55:56.926000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe62707540 a2=0 a3=7ffe6270752c items=0 ppid=2361 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.926000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:55:56.927000 audit[2517]: NETFILTER_CFG table=nat:80 family=10 entries=10 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:55:56.927000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffe62707540 a2=0 a3=7ffe6270752c items=0 ppid=2361 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:56.927000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:55:57.170150 kubelet[2093]: E1002 19:55:57.169954 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:57.311979 kubelet[2093]: E1002 19:55:57.311950 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" Oct 2 19:55:57.386119 kubelet[2093]: W1002 19:55:57.385857 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8ddd69_00be_4891_b491_0a395b851c77.slice/cri-containerd-aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363.scope WatchSource:0}: container "aaddc50409ddc69c4f2e4f0643d778ab574afeac9861e64f6caff7c6ecad0363" in namespace "k8s.io": not found Oct 2 19:55:57.472917 update_engine[1628]: I1002 19:55:57.472589 1628 update_attempter.cc:505] Updating boot flags... Oct 2 19:55:58.171115 kubelet[2093]: E1002 19:55:58.171060 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:59.172039 kubelet[2093]: E1002 19:55:59.171987 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:00.172539 kubelet[2093]: E1002 19:56:00.172497 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:00.495559 kubelet[2093]: W1002 19:56:00.495187 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8ddd69_00be_4891_b491_0a395b851c77.slice/cri-containerd-927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd.scope WatchSource:0}: task 927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd not found: not found Oct 2 19:56:01.173422 kubelet[2093]: E1002 19:56:01.173363 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:02.174058 kubelet[2093]: E1002 19:56:02.173995 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:02.314216 kubelet[2093]: E1002 19:56:02.314182 2093 
kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:03.175309 kubelet[2093]: E1002 19:56:03.175260 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:04.176779 kubelet[2093]: E1002 19:56:04.176725 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:05.177148 kubelet[2093]: E1002 19:56:05.177099 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:06.177918 kubelet[2093]: E1002 19:56:06.177864 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:07.145290 kubelet[2093]: E1002 19:56:07.145236 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:07.178573 kubelet[2093]: E1002 19:56:07.178523 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:07.314821 kubelet[2093]: E1002 19:56:07.314780 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:08.179372 kubelet[2093]: E1002 19:56:08.179322 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:08.433907 env[1633]: time="2023-10-02T19:56:08.433795362Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:56:08.463547 env[1633]: 
time="2023-10-02T19:56:08.463380967Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\"" Oct 2 19:56:08.464863 env[1633]: time="2023-10-02T19:56:08.464828148Z" level=info msg="StartContainer for \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\"" Oct 2 19:56:08.532152 systemd[1]: Started cri-containerd-37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3.scope. Oct 2 19:56:08.553841 systemd[1]: cri-containerd-37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3.scope: Deactivated successfully. Oct 2 19:56:08.798384 env[1633]: time="2023-10-02T19:56:08.798315359Z" level=info msg="shim disconnected" id=37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3 Oct 2 19:56:08.798384 env[1633]: time="2023-10-02T19:56:08.798381032Z" level=warning msg="cleaning up after shim disconnected" id=37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3 namespace=k8s.io Oct 2 19:56:08.798794 env[1633]: time="2023-10-02T19:56:08.798394825Z" level=info msg="cleaning up dead shim" Oct 2 19:56:08.831365 env[1633]: time="2023-10-02T19:56:08.831310530Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2639 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:08.832613 env[1633]: time="2023-10-02T19:56:08.832535427Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:56:08.836323 env[1633]: time="2023-10-02T19:56:08.836169912Z" level=error 
msg="Failed to pipe stdout of container \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\"" error="reading from a closed fifo" Oct 2 19:56:08.836816 env[1633]: time="2023-10-02T19:56:08.836492102Z" level=error msg="Failed to pipe stderr of container \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\"" error="reading from a closed fifo" Oct 2 19:56:08.839457 env[1633]: time="2023-10-02T19:56:08.839270324Z" level=error msg="StartContainer for \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:08.839718 kubelet[2093]: E1002 19:56:08.839689 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3" Oct 2 19:56:08.840848 kubelet[2093]: E1002 19:56:08.840812 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:08.840848 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:08.840848 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:56:08.840848 kubelet[2093]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mprk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:08.842183 kubelet[2093]: E1002 19:56:08.840889 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:56:09.179718 kubelet[2093]: E1002 19:56:09.179565 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:09.450108 systemd[1]: run-containerd-runc-k8s.io-37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3-runc.nhWvdP.mount: Deactivated successfully. Oct 2 19:56:09.450232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3-rootfs.mount: Deactivated successfully. Oct 2 19:56:09.574956 kubelet[2093]: I1002 19:56:09.574922 2093 scope.go:115] "RemoveContainer" containerID="927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd" Oct 2 19:56:09.575342 kubelet[2093]: I1002 19:56:09.575317 2093 scope.go:115] "RemoveContainer" containerID="927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd" Oct 2 19:56:09.576529 env[1633]: time="2023-10-02T19:56:09.576495899Z" level=info msg="RemoveContainer for \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\"" Oct 2 19:56:09.577549 env[1633]: time="2023-10-02T19:56:09.577515070Z" level=info msg="RemoveContainer for \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\"" Oct 2 19:56:09.577820 env[1633]: time="2023-10-02T19:56:09.577783216Z" level=error msg="RemoveContainer for \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\" failed" error="failed to set removing state for container \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\": container is already in removing state" Oct 2 19:56:09.577949 kubelet[2093]: E1002 19:56:09.577927 2093 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\": container is already in removing state" containerID="927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd" Oct 2 19:56:09.578040 kubelet[2093]: E1002 19:56:09.577969 2093 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd": container is already in removing state; Skipping pod "cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)" Oct 2 19:56:09.578693 kubelet[2093]: E1002 19:56:09.578281 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:56:09.581028 env[1633]: time="2023-10-02T19:56:09.580699408Z" level=info msg="RemoveContainer for \"927c81ef1185016c171c491486137b7b29fe18f28767bff20e7ec8a4b4ac85fd\" returns successfully" Oct 2 19:56:10.179831 kubelet[2093]: E1002 19:56:10.179769 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:11.180603 kubelet[2093]: E1002 19:56:11.180542 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:11.904904 kubelet[2093]: W1002 19:56:11.904861 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8ddd69_00be_4891_b491_0a395b851c77.slice/cri-containerd-37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3.scope WatchSource:0}: task 37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3 not found: not found Oct 2 19:56:12.181817 
kubelet[2093]: E1002 19:56:12.181576 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:12.316349 kubelet[2093]: E1002 19:56:12.316313 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:13.181865 kubelet[2093]: E1002 19:56:13.181726 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:14.182125 kubelet[2093]: E1002 19:56:14.182061 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:15.183321 kubelet[2093]: E1002 19:56:15.183274 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:16.184314 kubelet[2093]: E1002 19:56:16.184232 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:17.185714 kubelet[2093]: E1002 19:56:17.185210 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:17.317160 kubelet[2093]: E1002 19:56:17.317124 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:18.186094 kubelet[2093]: E1002 19:56:18.186036 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:19.187125 kubelet[2093]: E1002 19:56:19.187069 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:20.187995 kubelet[2093]: E1002 19:56:20.187945 2093 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:21.188558 kubelet[2093]: E1002 19:56:21.188506 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:22.189520 kubelet[2093]: E1002 19:56:22.189471 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:22.318724 kubelet[2093]: E1002 19:56:22.318691 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:23.190108 kubelet[2093]: E1002 19:56:23.190056 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:24.190398 kubelet[2093]: E1002 19:56:24.190357 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:24.432663 kubelet[2093]: E1002 19:56:24.432631 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:56:25.191306 kubelet[2093]: E1002 19:56:25.191253 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:26.192036 kubelet[2093]: E1002 19:56:26.191986 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:27.145102 kubelet[2093]: E1002 19:56:27.145053 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:56:27.192445 kubelet[2093]: E1002 19:56:27.192270 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:27.319890 kubelet[2093]: E1002 19:56:27.319857 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:28.193089 kubelet[2093]: E1002 19:56:28.193031 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:29.193539 kubelet[2093]: E1002 19:56:29.193487 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:30.194380 kubelet[2093]: E1002 19:56:30.194322 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:31.195200 kubelet[2093]: E1002 19:56:31.195150 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:32.195394 kubelet[2093]: E1002 19:56:32.195339 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:32.321155 kubelet[2093]: E1002 19:56:32.321130 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:33.195740 kubelet[2093]: E1002 19:56:33.195692 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:34.196173 kubelet[2093]: E1002 19:56:34.196127 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:35.197231 kubelet[2093]: E1002 19:56:35.197178 2093 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:35.436258 env[1633]: time="2023-10-02T19:56:35.436165505Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:56:35.460929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261304412.mount: Deactivated successfully. Oct 2 19:56:35.472183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791679640.mount: Deactivated successfully. Oct 2 19:56:35.477842 env[1633]: time="2023-10-02T19:56:35.477658257Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\"" Oct 2 19:56:35.480014 env[1633]: time="2023-10-02T19:56:35.479798975Z" level=info msg="StartContainer for \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\"" Oct 2 19:56:35.507900 systemd[1]: Started cri-containerd-a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce.scope. Oct 2 19:56:35.522793 systemd[1]: cri-containerd-a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce.scope: Deactivated successfully. 
Oct 2 19:56:35.541780 env[1633]: time="2023-10-02T19:56:35.541681950Z" level=info msg="shim disconnected" id=a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce Oct 2 19:56:35.541780 env[1633]: time="2023-10-02T19:56:35.541771016Z" level=warning msg="cleaning up after shim disconnected" id=a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce namespace=k8s.io Oct 2 19:56:35.541780 env[1633]: time="2023-10-02T19:56:35.541783697Z" level=info msg="cleaning up dead shim" Oct 2 19:56:35.551071 env[1633]: time="2023-10-02T19:56:35.551010899Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2681 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:35.551381 env[1633]: time="2023-10-02T19:56:35.551316807Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:56:35.554187 env[1633]: time="2023-10-02T19:56:35.554126442Z" level=error msg="Failed to pipe stdout of container \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\"" error="reading from a closed fifo" Oct 2 19:56:35.554299 env[1633]: time="2023-10-02T19:56:35.554223513Z" level=error msg="Failed to pipe stderr of container \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\"" error="reading from a closed fifo" Oct 2 19:56:35.559348 env[1633]: time="2023-10-02T19:56:35.559283962Z" level=error msg="StartContainer for \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:35.559558 kubelet[2093]: E1002 19:56:35.559534 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce" Oct 2 19:56:35.559677 kubelet[2093]: E1002 19:56:35.559665 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:35.559677 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:35.559677 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:56:35.559677 kubelet[2093]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mprk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:35.559990 kubelet[2093]: E1002 19:56:35.559714 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:56:35.629200 kubelet[2093]: I1002 19:56:35.629019 2093 scope.go:115] "RemoveContainer" containerID="37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3" Oct 2 19:56:35.629964 kubelet[2093]: I1002 19:56:35.629510 2093 scope.go:115] "RemoveContainer" containerID="37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3" Oct 2 19:56:35.631947 env[1633]: time="2023-10-02T19:56:35.631886942Z" level=info msg="RemoveContainer for \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\"" Oct 2 19:56:35.632115 env[1633]: time="2023-10-02T19:56:35.632091170Z" level=info msg="RemoveContainer for \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\"" Oct 2 19:56:35.632304 env[1633]: 
time="2023-10-02T19:56:35.632255314Z" level=error msg="RemoveContainer for \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\" failed" error="failed to set removing state for container \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\": container is already in removing state" Oct 2 19:56:35.632534 kubelet[2093]: E1002 19:56:35.632514 2093 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\": container is already in removing state" containerID="37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3" Oct 2 19:56:35.632620 kubelet[2093]: I1002 19:56:35.632555 2093 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3} err="rpc error: code = Unknown desc = failed to set removing state for container \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\": container is already in removing state" Oct 2 19:56:35.635725 env[1633]: time="2023-10-02T19:56:35.635685784Z" level=info msg="RemoveContainer for \"37c45f58f75a5dc2c76e490c59318c4492e6b2dc660410d344b8c38279c676a3\" returns successfully" Oct 2 19:56:35.636314 kubelet[2093]: E1002 19:56:35.636289 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:56:36.198160 kubelet[2093]: E1002 19:56:36.198119 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:36.447469 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce-rootfs.mount: Deactivated successfully. Oct 2 19:56:37.198982 kubelet[2093]: E1002 19:56:37.198930 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:37.323275 kubelet[2093]: E1002 19:56:37.323243 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:38.200064 kubelet[2093]: E1002 19:56:38.200014 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:38.647425 kubelet[2093]: W1002 19:56:38.647379 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8ddd69_00be_4891_b491_0a395b851c77.slice/cri-containerd-a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce.scope WatchSource:0}: task a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce not found: not found Oct 2 19:56:39.200591 kubelet[2093]: E1002 19:56:39.200538 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:40.201726 kubelet[2093]: E1002 19:56:40.201652 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:41.202752 kubelet[2093]: E1002 19:56:41.202702 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:42.203501 kubelet[2093]: E1002 19:56:42.203455 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:42.324893 kubelet[2093]: E1002 19:56:42.324860 2093 kubelet.go:2373] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:43.203739 kubelet[2093]: E1002 19:56:43.203685 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:44.203926 kubelet[2093]: E1002 19:56:44.203883 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:45.204814 kubelet[2093]: E1002 19:56:45.204761 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:46.205838 kubelet[2093]: E1002 19:56:46.205784 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.145343 kubelet[2093]: E1002 19:56:47.145293 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.206741 kubelet[2093]: E1002 19:56:47.206691 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.325511 kubelet[2093]: E1002 19:56:47.325475 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:48.207803 kubelet[2093]: E1002 19:56:48.207757 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:49.208495 kubelet[2093]: E1002 19:56:49.208442 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:49.432423 kubelet[2093]: E1002 19:56:49.431908 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:56:50.209473 kubelet[2093]: E1002 19:56:50.209435 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:51.210014 kubelet[2093]: E1002 19:56:51.209961 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.210357 kubelet[2093]: E1002 19:56:52.210309 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.326688 kubelet[2093]: E1002 19:56:52.326658 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:53.213750 kubelet[2093]: E1002 19:56:53.213696 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:54.214221 kubelet[2093]: E1002 19:56:54.214171 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.214585 kubelet[2093]: E1002 19:56:55.214547 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:56.214788 kubelet[2093]: E1002 19:56:56.214745 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:57.215434 kubelet[2093]: E1002 19:56:57.215380 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:57.327782 kubelet[2093]: E1002 19:56:57.327750 2093 
kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:58.215665 kubelet[2093]: E1002 19:56:58.215615 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:59.216782 kubelet[2093]: E1002 19:56:59.216701 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:00.217188 kubelet[2093]: E1002 19:57:00.217143 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:01.218026 kubelet[2093]: E1002 19:57:01.217989 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:02.218908 kubelet[2093]: E1002 19:57:02.218841 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:02.331098 kubelet[2093]: E1002 19:57:02.331044 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:03.219738 kubelet[2093]: E1002 19:57:03.219680 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:03.432056 kubelet[2093]: E1002 19:57:03.432020 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:57:04.220862 kubelet[2093]: E1002 19:57:04.220810 2093 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:05.221650 kubelet[2093]: E1002 19:57:05.221604 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.222113 kubelet[2093]: E1002 19:57:06.222052 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.145095 kubelet[2093]: E1002 19:57:07.145043 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.222518 kubelet[2093]: E1002 19:57:07.222463 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.332097 kubelet[2093]: E1002 19:57:07.332058 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:08.223019 kubelet[2093]: E1002 19:57:08.222963 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:09.223369 kubelet[2093]: E1002 19:57:09.223313 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:10.224518 kubelet[2093]: E1002 19:57:10.224464 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.225659 kubelet[2093]: E1002 19:57:11.225606 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:12.226780 kubelet[2093]: E1002 19:57:12.226730 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:12.333111 kubelet[2093]: E1002 
19:57:12.332999 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:13.227171 kubelet[2093]: E1002 19:57:13.227116 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:14.228124 kubelet[2093]: E1002 19:57:14.228069 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:15.228961 kubelet[2093]: E1002 19:57:15.228904 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:16.229280 kubelet[2093]: E1002 19:57:16.229228 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:16.438802 env[1633]: time="2023-10-02T19:57:16.438748314Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:57:16.475915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673295106.mount: Deactivated successfully. Oct 2 19:57:16.481260 env[1633]: time="2023-10-02T19:57:16.481134809Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\"" Oct 2 19:57:16.482255 env[1633]: time="2023-10-02T19:57:16.482069471Z" level=info msg="StartContainer for \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\"" Oct 2 19:57:16.520473 systemd[1]: Started cri-containerd-e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654.scope. 
Oct 2 19:57:16.536220 systemd[1]: cri-containerd-e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654.scope: Deactivated successfully. Oct 2 19:57:16.542952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654-rootfs.mount: Deactivated successfully. Oct 2 19:57:16.553989 env[1633]: time="2023-10-02T19:57:16.553932263Z" level=info msg="shim disconnected" id=e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654 Oct 2 19:57:16.553989 env[1633]: time="2023-10-02T19:57:16.553988025Z" level=warning msg="cleaning up after shim disconnected" id=e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654 namespace=k8s.io Oct 2 19:57:16.554540 env[1633]: time="2023-10-02T19:57:16.553999584Z" level=info msg="cleaning up dead shim" Oct 2 19:57:16.567496 env[1633]: time="2023-10-02T19:57:16.567443765Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2721 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:16.567889 env[1633]: time="2023-10-02T19:57:16.567823509Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:57:16.569244 env[1633]: time="2023-10-02T19:57:16.569190071Z" level=error msg="Failed to pipe stdout of container \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\"" error="reading from a closed fifo" Oct 2 19:57:16.569687 env[1633]: time="2023-10-02T19:57:16.569546152Z" level=error msg="Failed to pipe stderr of container \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\"" error="reading from a closed fifo" Oct 2 19:57:16.571824 env[1633]: 
time="2023-10-02T19:57:16.571739019Z" level=error msg="StartContainer for \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:16.572490 kubelet[2093]: E1002 19:57:16.572414 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654" Oct 2 19:57:16.573133 kubelet[2093]: E1002 19:57:16.573064 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:16.573133 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:16.573133 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:57:16.573133 kubelet[2093]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mprk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:16.573403 kubelet[2093]: E1002 19:57:16.573163 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:57:16.710023 kubelet[2093]: I1002 19:57:16.709982 2093 scope.go:115] "RemoveContainer" containerID="a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce" Oct 2 19:57:16.710917 kubelet[2093]: I1002 19:57:16.710536 2093 scope.go:115] "RemoveContainer" containerID="a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce" Oct 2 19:57:16.723914 env[1633]: time="2023-10-02T19:57:16.723868267Z" level=info msg="RemoveContainer for \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\"" Oct 2 19:57:16.725154 env[1633]: time="2023-10-02T19:57:16.725055689Z" level=info msg="RemoveContainer for \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\"" Oct 2 19:57:16.725311 env[1633]: time="2023-10-02T19:57:16.725223294Z" level=error msg="RemoveContainer for \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\" failed" error="failed to set removing state for container \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\": container is already in removing state" Oct 2 19:57:16.726463 kubelet[2093]: E1002 19:57:16.725469 2093 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\": container is already in removing state" containerID="a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce" Oct 2 19:57:16.729000 kubelet[2093]: E1002 19:57:16.728977 2093 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce": container is already in removing state; Skipping pod "cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)" Oct 2 
19:57:16.729634 kubelet[2093]: E1002 19:57:16.729611 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:57:16.733031 env[1633]: time="2023-10-02T19:57:16.731958118Z" level=info msg="RemoveContainer for \"a6dc8c760a56ac9fd30794e076a1ad898df228d60fda3e27d6af997f5c1c63ce\" returns successfully" Oct 2 19:57:17.229857 kubelet[2093]: E1002 19:57:17.229804 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:17.334515 kubelet[2093]: E1002 19:57:17.334469 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:18.230104 kubelet[2093]: E1002 19:57:18.230039 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:19.231192 kubelet[2093]: E1002 19:57:19.231142 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:19.661825 kubelet[2093]: W1002 19:57:19.661777 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8ddd69_00be_4891_b491_0a395b851c77.slice/cri-containerd-e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654.scope WatchSource:0}: task e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654 not found: not found Oct 2 19:57:20.232296 kubelet[2093]: E1002 19:57:20.232227 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:57:21.233352 kubelet[2093]: E1002 19:57:21.233261 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:22.234383 kubelet[2093]: E1002 19:57:22.234327 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:22.335734 kubelet[2093]: E1002 19:57:22.335699 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:23.235029 kubelet[2093]: E1002 19:57:23.234975 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:24.235799 kubelet[2093]: E1002 19:57:24.235754 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:25.236820 kubelet[2093]: E1002 19:57:25.236762 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:26.237286 kubelet[2093]: E1002 19:57:26.237186 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.145194 kubelet[2093]: E1002 19:57:27.145145 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.237526 kubelet[2093]: E1002 19:57:27.237472 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.337450 kubelet[2093]: E1002 19:57:27.337425 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:28.238340 kubelet[2093]: E1002 19:57:28.238283 2093 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:29.238967 kubelet[2093]: E1002 19:57:29.238919 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:30.239310 kubelet[2093]: E1002 19:57:30.239268 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.240199 kubelet[2093]: E1002 19:57:31.240147 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.432208 kubelet[2093]: E1002 19:57:31.432179 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:57:32.240898 kubelet[2093]: E1002 19:57:32.240831 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:32.339314 kubelet[2093]: E1002 19:57:32.339282 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:33.241269 kubelet[2093]: E1002 19:57:33.241217 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:34.241374 kubelet[2093]: E1002 19:57:34.241327 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:35.242061 kubelet[2093]: E1002 19:57:35.242010 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:57:36.243284 kubelet[2093]: E1002 19:57:36.243236 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:37.243607 kubelet[2093]: E1002 19:57:37.243556 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:37.340393 kubelet[2093]: E1002 19:57:37.340365 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:38.244090 kubelet[2093]: E1002 19:57:38.244017 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:39.245118 kubelet[2093]: E1002 19:57:39.245064 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:40.245260 kubelet[2093]: E1002 19:57:40.245207 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:41.245413 kubelet[2093]: E1002 19:57:41.245361 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:42.246312 kubelet[2093]: E1002 19:57:42.246259 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:42.341475 kubelet[2093]: E1002 19:57:42.341447 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:43.246943 kubelet[2093]: E1002 19:57:43.246888 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:44.248262 
kubelet[2093]: E1002 19:57:44.248060 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:45.248287 kubelet[2093]: E1002 19:57:45.248231 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:45.432588 kubelet[2093]: E1002 19:57:45.432536 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:57:46.249372 kubelet[2093]: E1002 19:57:46.249317 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.145348 kubelet[2093]: E1002 19:57:47.145298 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.249971 kubelet[2093]: E1002 19:57:47.249914 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.342541 kubelet[2093]: E1002 19:57:47.342499 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:48.250928 kubelet[2093]: E1002 19:57:48.250843 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.251120 kubelet[2093]: E1002 19:57:49.251062 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:50.251325 kubelet[2093]: E1002 19:57:50.251272 2093 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:51.252332 kubelet[2093]: E1002 19:57:51.252276 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:52.253264 kubelet[2093]: E1002 19:57:52.253068 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:52.343376 kubelet[2093]: E1002 19:57:52.343344 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:53.254030 kubelet[2093]: E1002 19:57:53.253989 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:54.254808 kubelet[2093]: E1002 19:57:54.254761 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:55.255679 kubelet[2093]: E1002 19:57:55.255623 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.255854 kubelet[2093]: E1002 19:57:56.255802 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.431466 kubelet[2093]: E1002 19:57:56.431419 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:57:57.256424 kubelet[2093]: E1002 19:57:57.256369 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:57.345013 kubelet[2093]: 
E1002 19:57:57.344975 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:58.256995 kubelet[2093]: E1002 19:57:58.256946 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:59.258096 kubelet[2093]: E1002 19:57:59.258047 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:00.258403 kubelet[2093]: E1002 19:58:00.258345 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:01.258997 kubelet[2093]: E1002 19:58:01.258945 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:02.260102 kubelet[2093]: E1002 19:58:02.259914 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:02.346754 kubelet[2093]: E1002 19:58:02.346664 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:03.260812 kubelet[2093]: E1002 19:58:03.260750 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:04.260990 kubelet[2093]: E1002 19:58:04.260932 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:05.262117 kubelet[2093]: E1002 19:58:05.262054 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:06.263296 kubelet[2093]: E1002 19:58:06.263232 2093 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.144787 kubelet[2093]: E1002 19:58:07.144738 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.263483 kubelet[2093]: E1002 19:58:07.263431 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.347902 kubelet[2093]: E1002 19:58:07.347872 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:08.263742 kubelet[2093]: E1002 19:58:08.263689 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:09.264697 kubelet[2093]: E1002 19:58:09.264643 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:09.432627 kubelet[2093]: E1002 19:58:09.432593 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:58:10.265046 kubelet[2093]: E1002 19:58:10.264993 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:11.266024 kubelet[2093]: E1002 19:58:11.265928 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.267210 kubelet[2093]: E1002 19:58:12.266794 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:58:12.349084 kubelet[2093]: E1002 19:58:12.349040 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:13.268650 kubelet[2093]: E1002 19:58:13.268502 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:14.269754 kubelet[2093]: E1002 19:58:14.269703 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:15.270884 kubelet[2093]: E1002 19:58:15.270835 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:16.271267 kubelet[2093]: E1002 19:58:16.271218 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.272404 kubelet[2093]: E1002 19:58:17.272349 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.351227 kubelet[2093]: E1002 19:58:17.351189 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:18.272565 kubelet[2093]: E1002 19:58:18.272514 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:19.273584 kubelet[2093]: E1002 19:58:19.273531 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:20.274729 kubelet[2093]: E1002 19:58:20.274673 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:20.431499 kubelet[2093]: E1002 19:58:20.431457 2093 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:58:21.275678 kubelet[2093]: E1002 19:58:21.275624 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.275949 kubelet[2093]: E1002 19:58:22.275898 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.352211 kubelet[2093]: E1002 19:58:22.352180 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:23.276868 kubelet[2093]: E1002 19:58:23.276815 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:24.277194 kubelet[2093]: E1002 19:58:24.277150 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:25.277517 kubelet[2093]: E1002 19:58:25.277467 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:26.277695 kubelet[2093]: E1002 19:58:26.277588 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.145094 kubelet[2093]: E1002 19:58:27.145030 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.278459 kubelet[2093]: E1002 19:58:27.278404 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:58:27.353771 kubelet[2093]: E1002 19:58:27.353731 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:28.278924 kubelet[2093]: E1002 19:58:28.278875 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:29.279202 kubelet[2093]: E1002 19:58:29.279152 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:30.280341 kubelet[2093]: E1002 19:58:30.280197 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:31.281199 kubelet[2093]: E1002 19:58:31.281157 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.281363 kubelet[2093]: E1002 19:58:32.281320 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.355724 kubelet[2093]: E1002 19:58:32.355638 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:32.432066 kubelet[2093]: E1002 19:58:32.432023 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:58:33.282166 kubelet[2093]: E1002 19:58:33.282115 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:58:34.282548 kubelet[2093]: E1002 19:58:34.282494 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:35.283190 kubelet[2093]: E1002 19:58:35.283138 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:36.283605 kubelet[2093]: E1002 19:58:36.283548 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:37.284048 kubelet[2093]: E1002 19:58:37.283997 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:37.356788 kubelet[2093]: E1002 19:58:37.356752 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:38.285146 kubelet[2093]: E1002 19:58:38.285097 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:39.285350 kubelet[2093]: E1002 19:58:39.285296 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:40.286376 kubelet[2093]: E1002 19:58:40.286329 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:41.287620 kubelet[2093]: E1002 19:58:41.287543 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:42.288433 kubelet[2093]: E1002 19:58:42.288373 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:42.358582 kubelet[2093]: E1002 19:58:42.358550 2093 kubelet.go:2373] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:43.289412 kubelet[2093]: E1002 19:58:43.289359 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:44.289856 kubelet[2093]: E1002 19:58:44.289774 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:44.435363 env[1633]: time="2023-10-02T19:58:44.435296673Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:58:44.470557 env[1633]: time="2023-10-02T19:58:44.470504765Z" level=info msg="CreateContainer within sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c\"" Oct 2 19:58:44.472335 env[1633]: time="2023-10-02T19:58:44.472223614Z" level=info msg="StartContainer for \"8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c\"" Oct 2 19:58:44.500030 systemd[1]: Started cri-containerd-8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c.scope. Oct 2 19:58:44.514796 systemd[1]: cri-containerd-8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c.scope: Deactivated successfully. Oct 2 19:58:44.520152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c-rootfs.mount: Deactivated successfully. 
Oct 2 19:58:44.538371 env[1633]: time="2023-10-02T19:58:44.538304819Z" level=info msg="shim disconnected" id=8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c Oct 2 19:58:44.538371 env[1633]: time="2023-10-02T19:58:44.538368223Z" level=warning msg="cleaning up after shim disconnected" id=8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c namespace=k8s.io Oct 2 19:58:44.538962 env[1633]: time="2023-10-02T19:58:44.538381014Z" level=info msg="cleaning up dead shim" Oct 2 19:58:44.550538 env[1633]: time="2023-10-02T19:58:44.550407886Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:58:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2767 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:58:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:58:44.551492 env[1633]: time="2023-10-02T19:58:44.551420881Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:58:44.556204 env[1633]: time="2023-10-02T19:58:44.556134791Z" level=error msg="Failed to pipe stdout of container \"8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c\"" error="reading from a closed fifo" Oct 2 19:58:44.557200 env[1633]: time="2023-10-02T19:58:44.557155656Z" level=error msg="Failed to pipe stderr of container \"8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c\"" error="reading from a closed fifo" Oct 2 19:58:44.559509 env[1633]: time="2023-10-02T19:58:44.559464268Z" level=error msg="StartContainer for \"8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:58:44.559713 kubelet[2093]: E1002 19:58:44.559694 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c" Oct 2 19:58:44.559843 kubelet[2093]: E1002 19:58:44.559822 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:58:44.559843 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:58:44.559843 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:58:44.559843 kubelet[2093]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mprk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:58:44.560057 kubelet[2093]: E1002 19:58:44.559874 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:58:44.875900 kubelet[2093]: I1002 19:58:44.875212 2093 scope.go:115] "RemoveContainer" containerID="e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654" Oct 2 19:58:44.875900 kubelet[2093]: I1002 19:58:44.875743 2093 scope.go:115] "RemoveContainer" containerID="e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654" Oct 2 19:58:44.877648 env[1633]: time="2023-10-02T19:58:44.877599429Z" level=info msg="RemoveContainer for \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\"" Oct 2 19:58:44.878308 env[1633]: time="2023-10-02T19:58:44.878197716Z" level=info msg="RemoveContainer for \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\"" Oct 2 19:58:44.878423 env[1633]: 
time="2023-10-02T19:58:44.878379695Z" level=error msg="RemoveContainer for \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\" failed" error="failed to set removing state for container \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\": container is already in removing state" Oct 2 19:58:44.879102 kubelet[2093]: E1002 19:58:44.878651 2093 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\": container is already in removing state" containerID="e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654" Oct 2 19:58:44.879102 kubelet[2093]: E1002 19:58:44.878704 2093 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654": container is already in removing state; Skipping pod "cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)" Oct 2 19:58:44.879267 kubelet[2093]: E1002 19:58:44.879209 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-7n6mw_kube-system(6e8ddd69-00be-4891-b491-0a395b851c77)\"" pod="kube-system/cilium-7n6mw" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 Oct 2 19:58:44.883336 env[1633]: time="2023-10-02T19:58:44.883284153Z" level=info msg="RemoveContainer for \"e2607c34a9b1c3f37b4b127da47fbf9d5015c756bb6c94fa486ef00bae6f6654\" returns successfully" Oct 2 19:58:45.290220 kubelet[2093]: E1002 19:58:45.290171 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:46.291308 kubelet[2093]: E1002 19:58:46.291261 2093 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.145221 kubelet[2093]: E1002 19:58:47.145167 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.292096 kubelet[2093]: E1002 19:58:47.292042 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.359407 kubelet[2093]: E1002 19:58:47.359381 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:47.645067 kubelet[2093]: W1002 19:58:47.645020 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8ddd69_00be_4891_b491_0a395b851c77.slice/cri-containerd-8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c.scope WatchSource:0}: task 8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c not found: not found Oct 2 19:58:48.292851 kubelet[2093]: E1002 19:58:48.292798 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:49.293681 kubelet[2093]: E1002 19:58:49.293633 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:50.294185 kubelet[2093]: E1002 19:58:50.294127 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:51.295087 kubelet[2093]: E1002 19:58:51.295029 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:52.296149 kubelet[2093]: E1002 19:58:52.296109 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:58:52.361033 kubelet[2093]: E1002 19:58:52.361003 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:52.825279 env[1633]: time="2023-10-02T19:58:52.825231645Z" level=info msg="StopPodSandbox for \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\"" Oct 2 19:58:52.827875 env[1633]: time="2023-10-02T19:58:52.825300786Z" level=info msg="Container to stop \"8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:58:52.827105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b-shm.mount: Deactivated successfully. Oct 2 19:58:52.835430 systemd[1]: cri-containerd-2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b.scope: Deactivated successfully. Oct 2 19:58:52.838589 kernel: kauditd_printk_skb: 171 callbacks suppressed Oct 2 19:58:52.838704 kernel: audit: type=1334 audit(1696276732.834:738): prog-id=76 op=UNLOAD Oct 2 19:58:52.834000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:58:52.841000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:58:52.845117 kernel: audit: type=1334 audit(1696276732.841:739): prog-id=79 op=UNLOAD Oct 2 19:58:52.871935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b-rootfs.mount: Deactivated successfully. 
Oct 2 19:58:52.889321 env[1633]: time="2023-10-02T19:58:52.889205655Z" level=info msg="shim disconnected" id=2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b Oct 2 19:58:52.889321 env[1633]: time="2023-10-02T19:58:52.889264810Z" level=warning msg="cleaning up after shim disconnected" id=2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b namespace=k8s.io Oct 2 19:58:52.889321 env[1633]: time="2023-10-02T19:58:52.889297481Z" level=info msg="cleaning up dead shim" Oct 2 19:58:52.898594 env[1633]: time="2023-10-02T19:58:52.898547953Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:58:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2799 runtime=io.containerd.runc.v2\n" Oct 2 19:58:52.899468 env[1633]: time="2023-10-02T19:58:52.899119377Z" level=info msg="TearDown network for sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" successfully" Oct 2 19:58:52.899468 env[1633]: time="2023-10-02T19:58:52.899191578Z" level=info msg="StopPodSandbox for \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" returns successfully" Oct 2 19:58:52.979339 kubelet[2093]: I1002 19:58:52.979292 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-hostproc\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979594 kubelet[2093]: I1002 19:58:52.979375 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mprk5\" (UniqueName: \"kubernetes.io/projected/6e8ddd69-00be-4891-b491-0a395b851c77-kube-api-access-mprk5\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979594 kubelet[2093]: I1002 19:58:52.979404 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-bpf-maps\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979594 kubelet[2093]: I1002 19:58:52.979428 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cni-path\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979594 kubelet[2093]: I1002 19:58:52.979452 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-host-proc-sys-net\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979594 kubelet[2093]: I1002 19:58:52.979477 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-cgroup\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979594 kubelet[2093]: I1002 19:58:52.979501 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-run\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979872 kubelet[2093]: I1002 19:58:52.979528 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-xtables-lock\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979872 kubelet[2093]: I1002 19:58:52.979561 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-host-proc-sys-kernel\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979872 kubelet[2093]: I1002 19:58:52.979594 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e8ddd69-00be-4891-b491-0a395b851c77-hubble-tls\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979872 kubelet[2093]: I1002 19:58:52.979626 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e8ddd69-00be-4891-b491-0a395b851c77-clustermesh-secrets\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979872 kubelet[2093]: I1002 19:58:52.979660 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-config-path\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.979872 kubelet[2093]: I1002 19:58:52.979687 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-lib-modules\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.982257 kubelet[2093]: I1002 19:58:52.979715 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-etc-cni-netd\") pod \"6e8ddd69-00be-4891-b491-0a395b851c77\" (UID: \"6e8ddd69-00be-4891-b491-0a395b851c77\") " Oct 2 19:58:52.982257 kubelet[2093]: I1002 19:58:52.979784 2093 
operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.982257 kubelet[2093]: I1002 19:58:52.979831 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-hostproc" (OuterVolumeSpecName: "hostproc") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.982257 kubelet[2093]: I1002 19:58:52.980154 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.982257 kubelet[2093]: I1002 19:58:52.980213 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.982553 kubelet[2093]: I1002 19:58:52.980240 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cni-path" (OuterVolumeSpecName: "cni-path") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.982553 kubelet[2093]: I1002 19:58:52.980261 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.982553 kubelet[2093]: I1002 19:58:52.980302 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.982553 kubelet[2093]: I1002 19:58:52.980324 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.982553 kubelet[2093]: W1002 19:58:52.980671 2093 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6e8ddd69-00be-4891-b491-0a395b851c77/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:58:52.986161 kubelet[2093]: I1002 19:58:52.982846 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.984983 systemd[1]: var-lib-kubelet-pods-6e8ddd69\x2d00be\x2d4891\x2db491\x2d0a395b851c77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmprk5.mount: Deactivated successfully. Oct 2 19:58:52.986426 kubelet[2093]: I1002 19:58:52.986171 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:58:52.986426 kubelet[2093]: I1002 19:58:52.986271 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8ddd69-00be-4891-b491-0a395b851c77-kube-api-access-mprk5" (OuterVolumeSpecName: "kube-api-access-mprk5") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "kube-api-access-mprk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:58:52.988378 kubelet[2093]: I1002 19:58:52.988339 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:58:52.993862 kubelet[2093]: I1002 19:58:52.993822 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8ddd69-00be-4891-b491-0a395b851c77-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:58:52.995064 systemd[1]: var-lib-kubelet-pods-6e8ddd69\x2d00be\x2d4891\x2db491\x2d0a395b851c77-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:58:52.995222 systemd[1]: var-lib-kubelet-pods-6e8ddd69\x2d00be\x2d4891\x2db491\x2d0a395b851c77-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:58:53.001648 kubelet[2093]: I1002 19:58:53.001607 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8ddd69-00be-4891-b491-0a395b851c77-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6e8ddd69-00be-4891-b491-0a395b851c77" (UID: "6e8ddd69-00be-4891-b491-0a395b851c77"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:58:53.080127 kubelet[2093]: I1002 19:58:53.079860 2093 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-run\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080127 kubelet[2093]: I1002 19:58:53.079898 2093 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-xtables-lock\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080127 kubelet[2093]: I1002 19:58:53.079914 2093 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-host-proc-sys-kernel\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080127 kubelet[2093]: I1002 19:58:53.079927 2093 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e8ddd69-00be-4891-b491-0a395b851c77-hubble-tls\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080127 kubelet[2093]: I1002 19:58:53.079946 2093 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e8ddd69-00be-4891-b491-0a395b851c77-clustermesh-secrets\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080127 kubelet[2093]: I1002 19:58:53.079960 2093 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-config-path\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080127 kubelet[2093]: I1002 19:58:53.079973 2093 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-lib-modules\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080127 kubelet[2093]: I1002 19:58:53.079986 2093 
reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-etc-cni-netd\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080584 kubelet[2093]: I1002 19:58:53.079997 2093 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-hostproc\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080584 kubelet[2093]: I1002 19:58:53.080010 2093 reconciler.go:399] "Volume detached for volume \"kube-api-access-mprk5\" (UniqueName: \"kubernetes.io/projected/6e8ddd69-00be-4891-b491-0a395b851c77-kube-api-access-mprk5\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080584 kubelet[2093]: I1002 19:58:53.080024 2093 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-bpf-maps\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080584 kubelet[2093]: I1002 19:58:53.080037 2093 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cni-path\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080584 kubelet[2093]: I1002 19:58:53.080049 2093 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-host-proc-sys-net\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.080584 kubelet[2093]: I1002 19:58:53.080062 2093 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e8ddd69-00be-4891-b491-0a395b851c77-cilium-cgroup\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:58:53.297631 kubelet[2093]: E1002 19:58:53.297594 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:53.442511 systemd[1]: 
Removed slice kubepods-burstable-pod6e8ddd69_00be_4891_b491_0a395b851c77.slice. Oct 2 19:58:53.897262 kubelet[2093]: I1002 19:58:53.897232 2093 scope.go:115] "RemoveContainer" containerID="8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c" Oct 2 19:58:53.908517 env[1633]: time="2023-10-02T19:58:53.908458629Z" level=info msg="RemoveContainer for \"8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c\"" Oct 2 19:58:53.913769 env[1633]: time="2023-10-02T19:58:53.913637225Z" level=info msg="RemoveContainer for \"8ab6804fb5655071ba3364e17310cb9c6455626f570aa477a817a386523af18c\" returns successfully" Oct 2 19:58:54.298945 kubelet[2093]: E1002 19:58:54.298902 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:55.299320 kubelet[2093]: E1002 19:58:55.299269 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:55.434054 kubelet[2093]: I1002 19:58:55.434018 2093 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6e8ddd69-00be-4891-b491-0a395b851c77 path="/var/lib/kubelet/pods/6e8ddd69-00be-4891-b491-0a395b851c77/volumes" Oct 2 19:58:56.300407 kubelet[2093]: E1002 19:58:56.300362 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:57.235378 kubelet[2093]: I1002 19:58:57.235284 2093 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:58:57.235586 kubelet[2093]: E1002 19:58:57.235401 2093 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.235586 kubelet[2093]: E1002 19:58:57.235415 2093 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.235586 kubelet[2093]: E1002 19:58:57.235423 2093 
cpu_manager.go:394] "RemoveStaleState: removing container" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.235586 kubelet[2093]: I1002 19:58:57.235443 2093 memory_manager.go:345] "RemoveStaleState removing state" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.235586 kubelet[2093]: I1002 19:58:57.235452 2093 memory_manager.go:345] "RemoveStaleState removing state" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.235586 kubelet[2093]: I1002 19:58:57.235460 2093 memory_manager.go:345] "RemoveStaleState removing state" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.235586 kubelet[2093]: I1002 19:58:57.235469 2093 memory_manager.go:345] "RemoveStaleState removing state" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.240741 systemd[1]: Created slice kubepods-besteffort-pod193bdd93_1e36_4c1e_ba1b_d603917d4881.slice. 
Oct 2 19:58:57.299902 kubelet[2093]: I1002 19:58:57.299865 2093 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:58:57.300157 kubelet[2093]: E1002 19:58:57.299923 2093 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.300157 kubelet[2093]: E1002 19:58:57.299936 2093 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.300157 kubelet[2093]: I1002 19:58:57.299958 2093 memory_manager.go:345] "RemoveStaleState removing state" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.300157 kubelet[2093]: I1002 19:58:57.299966 2093 memory_manager.go:345] "RemoveStaleState removing state" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.300157 kubelet[2093]: E1002 19:58:57.299985 2093 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="6e8ddd69-00be-4891-b491-0a395b851c77" containerName="mount-cgroup" Oct 2 19:58:57.300584 kubelet[2093]: E1002 19:58:57.300498 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:57.306242 systemd[1]: Created slice kubepods-burstable-poda302d3b4_9c9c_4d55_88c6_b98b9d56dbdf.slice. 
Oct 2 19:58:57.308476 kubelet[2093]: I1002 19:58:57.308402 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxd46\" (UniqueName: \"kubernetes.io/projected/193bdd93-1e36-4c1e-ba1b-d603917d4881-kube-api-access-bxd46\") pod \"cilium-operator-69b677f97c-pccc7\" (UID: \"193bdd93-1e36-4c1e-ba1b-d603917d4881\") " pod="kube-system/cilium-operator-69b677f97c-pccc7" Oct 2 19:58:57.308821 kubelet[2093]: I1002 19:58:57.308496 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/193bdd93-1e36-4c1e-ba1b-d603917d4881-cilium-config-path\") pod \"cilium-operator-69b677f97c-pccc7\" (UID: \"193bdd93-1e36-4c1e-ba1b-d603917d4881\") " pod="kube-system/cilium-operator-69b677f97c-pccc7" Oct 2 19:58:57.362177 kubelet[2093]: E1002 19:58:57.362132 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:57.409772 kubelet[2093]: I1002 19:58:57.409734 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-run\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.409963 kubelet[2093]: I1002 19:58:57.409789 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-bpf-maps\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.409963 kubelet[2093]: I1002 19:58:57.409821 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cni-path\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.409963 kubelet[2093]: I1002 19:58:57.409847 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-xtables-lock\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.409963 kubelet[2093]: I1002 19:58:57.409872 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-hostproc\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.409963 kubelet[2093]: I1002 19:58:57.409901 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-clustermesh-secrets\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.409963 kubelet[2093]: I1002 19:58:57.409931 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-host-proc-sys-kernel\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.410404 kubelet[2093]: I1002 19:58:57.409957 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-etc-cni-netd\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " 
pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.410404 kubelet[2093]: I1002 19:58:57.409987 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-config-path\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.410404 kubelet[2093]: I1002 19:58:57.410019 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-ipsec-secrets\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.410404 kubelet[2093]: I1002 19:58:57.410049 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-host-proc-sys-net\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.410404 kubelet[2093]: I1002 19:58:57.410103 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q448\" (UniqueName: \"kubernetes.io/projected/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-kube-api-access-8q448\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.410404 kubelet[2093]: I1002 19:58:57.410154 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-cgroup\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.410693 kubelet[2093]: I1002 19:58:57.410188 2093 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-lib-modules\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.410693 kubelet[2093]: I1002 19:58:57.410220 2093 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-hubble-tls\") pod \"cilium-7q9vc\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " pod="kube-system/cilium-7q9vc" Oct 2 19:58:57.545840 env[1633]: time="2023-10-02T19:58:57.545280114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-pccc7,Uid:193bdd93-1e36-4c1e-ba1b-d603917d4881,Namespace:kube-system,Attempt:0,}" Oct 2 19:58:57.569165 env[1633]: time="2023-10-02T19:58:57.569072122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:58:57.569476 env[1633]: time="2023-10-02T19:58:57.569133502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:58:57.569476 env[1633]: time="2023-10-02T19:58:57.569149486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:58:57.569476 env[1633]: time="2023-10-02T19:58:57.569441868Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321 pid=2827 runtime=io.containerd.runc.v2 Oct 2 19:58:57.595613 systemd[1]: Started cri-containerd-293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321.scope. 
Oct 2 19:58:57.627520 kernel: audit: type=1400 audit(1696276737.616:740): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.627644 kernel: audit: type=1400 audit(1696276737.616:741): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.628169 env[1633]: time="2023-10-02T19:58:57.628060825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7q9vc,Uid:a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf,Namespace:kube-system,Attempt:0,}" Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.638661 kernel: audit: type=1400 audit(1696276737.616:742): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.638780 kernel: audit: type=1400 audit(1696276737.616:743): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.651233 kernel: audit: type=1400 audit(1696276737.616:744): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.651332 kernel: audit: type=1400 audit(1696276737.616:745): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.651359 kernel: audit: type=1400 audit(1696276737.616:746): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.665464 env[1633]: time="2023-10-02T19:58:57.662173202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:58:57.665464 env[1633]: time="2023-10-02T19:58:57.662220233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:58:57.665464 env[1633]: time="2023-10-02T19:58:57.662236918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:58:57.665464 env[1633]: time="2023-10-02T19:58:57.662482351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130 pid=2860 runtime=io.containerd.runc.v2 Oct 2 19:58:57.670334 kernel: audit: type=1400 audit(1696276737.616:747): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.616000 audit: BPF prog-id=87 op=LOAD Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=2827 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.621000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239336565313961636539303564616130323030306233386335313132 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=c items=0 ppid=2827 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239336565313961636539303564616130323030306233386335313132 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: 
AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.621000 audit: BPF prog-id=88 op=LOAD Oct 2 19:58:57.621000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c000356880 items=0 ppid=2827 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239336565313961636539303564616130323030306233386335313132 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } 
for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit: BPF prog-id=89 op=LOAD Oct 2 19:58:57.626000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 
a3=c0003568c8 items=0 ppid=2827 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239336565313961636539303564616130323030306233386335313132 Oct 2 19:58:57.626000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:58:57.626000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { perfmon } for pid=2839 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit[2839]: AVC avc: denied { bpf } for pid=2839 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.626000 audit: BPF prog-id=90 op=LOAD Oct 2 19:58:57.626000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c000356cd8 items=0 ppid=2827 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239336565313961636539303564616130323030306233386335313132 Oct 2 19:58:57.687140 systemd[1]: Started cri-containerd-bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130.scope. 
Oct 2 19:58:57.698250 env[1633]: time="2023-10-02T19:58:57.698167989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-pccc7,Uid:193bdd93-1e36-4c1e-ba1b-d603917d4881,Namespace:kube-system,Attempt:0,} returns sandbox id \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\"" Oct 2 19:58:57.701390 env[1633]: time="2023-10-02T19:58:57.701350451Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:58:57.709000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.709000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:58:57.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit: BPF prog-id=91 op=LOAD Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2860 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266623930373230313565313036313338633161323737663038306266 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2860 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266623930373230313565313036313338633161323737663038306266 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.710000 audit: BPF prog-id=92 op=LOAD Oct 2 19:58:57.710000 audit[2868]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0001854a0 items=0 ppid=2860 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266623930373230313565313036313338633161323737663038306266 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit: BPF prog-id=93 op=LOAD Oct 2 19:58:57.711000 audit[2868]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0001854e8 items=0 ppid=2860 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266623930373230313565313036313338633161323737663038306266 Oct 2 19:58:57.711000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:58:57.711000 audit: BPF prog-id=92 op=UNLOAD Oct 2 
19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { perfmon } for pid=2868 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit[2868]: AVC avc: denied { bpf } for pid=2868 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:58:57.711000 audit: BPF prog-id=94 op=LOAD Oct 2 19:58:57.711000 audit[2868]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0001858f8 items=0 ppid=2860 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:58:57.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266623930373230313565313036313338633161323737663038306266 Oct 2 19:58:57.731601 env[1633]: time="2023-10-02T19:58:57.731561707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7q9vc,Uid:a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\"" Oct 2 19:58:57.734926 env[1633]: time="2023-10-02T19:58:57.734829314Z" level=info msg="CreateContainer within sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:58:57.751711 env[1633]: time="2023-10-02T19:58:57.751659086Z" level=info msg="CreateContainer within sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\"" Oct 2 19:58:57.753881 env[1633]: time="2023-10-02T19:58:57.753847315Z" level=info msg="StartContainer for \"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\"" Oct 2 19:58:57.774603 systemd[1]: Started 
cri-containerd-c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7.scope. Oct 2 19:58:57.802956 systemd[1]: cri-containerd-c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7.scope: Deactivated successfully. Oct 2 19:58:57.841606 env[1633]: time="2023-10-02T19:58:57.841548339Z" level=info msg="shim disconnected" id=c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7 Oct 2 19:58:57.841606 env[1633]: time="2023-10-02T19:58:57.841603376Z" level=warning msg="cleaning up after shim disconnected" id=c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7 namespace=k8s.io Oct 2 19:58:57.841606 env[1633]: time="2023-10-02T19:58:57.841614637Z" level=info msg="cleaning up dead shim" Oct 2 19:58:57.851917 env[1633]: time="2023-10-02T19:58:57.851803739Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:58:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2922 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:58:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:58:57.852242 env[1633]: time="2023-10-02T19:58:57.852182180Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 19:58:57.852617 env[1633]: time="2023-10-02T19:58:57.852571059Z" level=error msg="Failed to pipe stderr of container \"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\"" error="reading from a closed fifo" Oct 2 19:58:57.853162 env[1633]: time="2023-10-02T19:58:57.853127130Z" level=error msg="Failed to pipe stdout of container \"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\"" error="reading from a closed fifo" Oct 2 19:58:57.855154 env[1633]: time="2023-10-02T19:58:57.855101267Z" level=error msg="StartContainer for 
\"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:58:57.855373 kubelet[2093]: E1002 19:58:57.855352 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7" Oct 2 19:58:57.855497 kubelet[2093]: E1002 19:58:57.855474 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:58:57.855497 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:58:57.855497 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:58:57.855497 kubelet[2093]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8q448,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:58:57.855844 kubelet[2093]: E1002 19:58:57.855525 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:58:57.926434 env[1633]: time="2023-10-02T19:58:57.926391533Z" level=info msg="CreateContainer within sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:58:57.968813 env[1633]: time="2023-10-02T19:58:57.968750996Z" level=info msg="CreateContainer within sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\"" Oct 2 19:58:57.970025 env[1633]: time="2023-10-02T19:58:57.969988804Z" level=info msg="StartContainer for \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\"" Oct 2 19:58:58.022975 systemd[1]: Started cri-containerd-3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0.scope. Oct 2 19:58:58.039262 systemd[1]: cri-containerd-3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0.scope: Deactivated successfully. 
Oct 2 19:58:58.054231 env[1633]: time="2023-10-02T19:58:58.054170989Z" level=info msg="shim disconnected" id=3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0 Oct 2 19:58:58.054231 env[1633]: time="2023-10-02T19:58:58.054228932Z" level=warning msg="cleaning up after shim disconnected" id=3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0 namespace=k8s.io Oct 2 19:58:58.054548 env[1633]: time="2023-10-02T19:58:58.054240008Z" level=info msg="cleaning up dead shim" Oct 2 19:58:58.065763 env[1633]: time="2023-10-02T19:58:58.065628246Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:58:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2958 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:58:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:58:58.066793 env[1633]: time="2023-10-02T19:58:58.066725887Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 19:58:58.067246 env[1633]: time="2023-10-02T19:58:58.067197743Z" level=error msg="Failed to pipe stdout of container \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\"" error="reading from a closed fifo" Oct 2 19:58:58.068393 env[1633]: time="2023-10-02T19:58:58.068141180Z" level=error msg="Failed to pipe stderr of container \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\"" error="reading from a closed fifo" Oct 2 19:58:58.070844 env[1633]: time="2023-10-02T19:58:58.070795984Z" level=error msg="StartContainer for \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:58:58.071119 kubelet[2093]: E1002 19:58:58.071040 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0" Oct 2 19:58:58.071253 kubelet[2093]: E1002 19:58:58.071204 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:58:58.071253 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:58:58.071253 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:58:58.071253 kubelet[2093]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8q448,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:58:58.071531 kubelet[2093]: E1002 19:58:58.071263 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:58:58.301100 kubelet[2093]: E1002 19:58:58.301048 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:58.942279 kubelet[2093]: I1002 19:58:58.942184 2093 scope.go:115] "RemoveContainer" containerID="c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7" Oct 2 19:58:58.944209 kubelet[2093]: I1002 19:58:58.943879 2093 scope.go:115] "RemoveContainer" containerID="c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7" Oct 2 19:58:58.951811 env[1633]: time="2023-10-02T19:58:58.951769070Z" level=info msg="RemoveContainer for \"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\"" Oct 2 19:58:58.955905 env[1633]: 
time="2023-10-02T19:58:58.955872531Z" level=info msg="RemoveContainer for \"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\" returns successfully" Oct 2 19:58:58.956300 env[1633]: time="2023-10-02T19:58:58.956271186Z" level=info msg="RemoveContainer for \"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\"" Oct 2 19:58:58.956439 env[1633]: time="2023-10-02T19:58:58.956422647Z" level=info msg="RemoveContainer for \"c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7\" returns successfully" Oct 2 19:58:58.957320 kubelet[2093]: E1002 19:58:58.957286 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf)\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:58:59.098717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625902870.mount: Deactivated successfully. 
Oct 2 19:58:59.302025 kubelet[2093]: E1002 19:58:59.301960 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:59.947692 kubelet[2093]: E1002 19:58:59.947658 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf)\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:59:00.156196 env[1633]: time="2023-10-02T19:59:00.156141998Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:00.158735 env[1633]: time="2023-10-02T19:59:00.158686203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:00.165968 env[1633]: time="2023-10-02T19:59:00.165915724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:00.168745 env[1633]: time="2023-10-02T19:59:00.168515137Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 19:59:00.175348 env[1633]: time="2023-10-02T19:59:00.175029059Z" level=info msg="CreateContainer within sandbox \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:59:00.199260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993876957.mount: Deactivated successfully. Oct 2 19:59:00.213223 env[1633]: time="2023-10-02T19:59:00.213173450Z" level=info msg="CreateContainer within sandbox \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\"" Oct 2 19:59:00.214116 env[1633]: time="2023-10-02T19:59:00.214063948Z" level=info msg="StartContainer for \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\"" Oct 2 19:59:00.251921 systemd[1]: Started cri-containerd-a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1.scope. Oct 2 19:59:00.295655 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:59:00.295879 kernel: audit: type=1400 audit(1696276740.286:776): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.307957 kernel: audit: type=1400 audit(1696276740.286:777): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.308167 kernel: audit: type=1400 audit(1696276740.286:778): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.308462 kubelet[2093]: E1002 19:59:00.308190 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.315279 kernel: audit: type=1400 audit(1696276740.286:779): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.325445 kernel: audit: type=1400 audit(1696276740.286:780): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.325737 kernel: audit: type=1400 audit(1696276740.286:781): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:59:00.340255 kernel: audit: type=1400 audit(1696276740.286:782): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.340406 kernel: audit: type=1400 audit(1696276740.286:783): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.286000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.352297 kernel: audit: type=1400 audit(1696276740.286:784): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.357757 kernel: audit: type=1400 audit(1696276740.287:785): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.287000 audit: BPF prog-id=95 op=LOAD Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=2827 pid=2980 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:00.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133623438663364393866666537306563356663613339363537663631 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=2827 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:00.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133623438663364393866666537306563356663613339363537663631 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:59:00.288000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.288000 audit: BPF prog-id=96 op=LOAD Oct 2 19:59:00.288000 audit[2980]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c00020a9b0 items=0 ppid=2827 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:00.288000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133623438663364393866666537306563356663613339363537663631 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: 
denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit: BPF prog-id=97 op=LOAD Oct 2 19:59:00.294000 audit[2980]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 a3=c00020a9f8 items=0 ppid=2827 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:00.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133623438663364393866666537306563356663613339363537663631 Oct 2 19:59:00.294000 audit: BPF prog-id=97 op=UNLOAD Oct 2 19:59:00.294000 audit: BPF prog-id=96 op=UNLOAD Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { perfmon } for pid=2980 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit[2980]: AVC avc: denied { bpf } for pid=2980 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:00.294000 audit: BPF prog-id=98 op=LOAD Oct 2 19:59:00.294000 audit[2980]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c00020ae08 items=0 ppid=2827 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:00.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133623438663364393866666537306563356663613339363537663631 Oct 2 19:59:00.365474 env[1633]: time="2023-10-02T19:59:00.365422806Z" level=info msg="StartContainer for \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\" returns successfully" Oct 2 
19:59:00.413000 audit[2992]: AVC avc: denied { map_create } for pid=2992 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c67,c940 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c67,c940 tclass=bpf permissive=0 Oct 2 19:59:00.413000 audit[2992]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0006497d0 a2=48 a3=c0006497c0 items=0 ppid=2827 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c67,c940 key=(null) Oct 2 19:59:00.413000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:59:00.952677 kubelet[2093]: W1002 19:59:00.952580 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda302d3b4_9c9c_4d55_88c6_b98b9d56dbdf.slice/cri-containerd-c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7.scope WatchSource:0}: container "c35e7298f9083f1a17ddecfd4ff8904419225a437e6620a5add1b8bbcf0868e7" in namespace "k8s.io": not found Oct 2 19:59:01.308908 kubelet[2093]: E1002 19:59:01.308835 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.309417 kubelet[2093]: E1002 19:59:02.309355 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.363600 kubelet[2093]: E1002 19:59:02.363567 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:03.310457 kubelet[2093]: E1002 19:59:03.310402 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:04.061947 kubelet[2093]: W1002 19:59:04.061900 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda302d3b4_9c9c_4d55_88c6_b98b9d56dbdf.slice/cri-containerd-3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0.scope WatchSource:0}: task 3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0 not found: not found Oct 2 19:59:04.311220 kubelet[2093]: E1002 19:59:04.311167 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:05.312019 kubelet[2093]: E1002 19:59:05.311970 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:06.313150 kubelet[2093]: E1002 19:59:06.313099 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.144833 kubelet[2093]: E1002 19:59:07.144784 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.313893 kubelet[2093]: E1002 19:59:07.313843 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.364861 kubelet[2093]: E1002 19:59:07.364830 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:08.314565 kubelet[2093]: E1002 19:59:08.314513 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:09.315111 kubelet[2093]: E1002 19:59:09.315052 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:10.315980 
kubelet[2093]: E1002 19:59:10.315929 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:11.316851 kubelet[2093]: E1002 19:59:11.316797 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:12.317323 kubelet[2093]: E1002 19:59:12.317270 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:12.366430 kubelet[2093]: E1002 19:59:12.366390 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:12.434307 env[1633]: time="2023-10-02T19:59:12.434264447Z" level=info msg="CreateContainer within sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:59:12.449212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958857535.mount: Deactivated successfully. Oct 2 19:59:12.457256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902283890.mount: Deactivated successfully. Oct 2 19:59:12.462683 env[1633]: time="2023-10-02T19:59:12.462638356Z" level=info msg="CreateContainer within sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\"" Oct 2 19:59:12.463678 env[1633]: time="2023-10-02T19:59:12.463639195Z" level=info msg="StartContainer for \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\"" Oct 2 19:59:12.528009 systemd[1]: Started cri-containerd-fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8.scope. 
Oct 2 19:59:12.548108 systemd[1]: cri-containerd-fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8.scope: Deactivated successfully. Oct 2 19:59:12.773052 env[1633]: time="2023-10-02T19:59:12.772988947Z" level=info msg="shim disconnected" id=fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8 Oct 2 19:59:12.773052 env[1633]: time="2023-10-02T19:59:12.773055584Z" level=warning msg="cleaning up after shim disconnected" id=fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8 namespace=k8s.io Oct 2 19:59:12.773052 env[1633]: time="2023-10-02T19:59:12.773068805Z" level=info msg="cleaning up dead shim" Oct 2 19:59:12.783604 env[1633]: time="2023-10-02T19:59:12.783526650Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3037 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:12.783924 env[1633]: time="2023-10-02T19:59:12.783857604Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:59:12.787186 env[1633]: time="2023-10-02T19:59:12.787114682Z" level=error msg="Failed to pipe stdout of container \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\"" error="reading from a closed fifo" Oct 2 19:59:12.787372 env[1633]: time="2023-10-02T19:59:12.787212855Z" level=error msg="Failed to pipe stderr of container \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\"" error="reading from a closed fifo" Oct 2 19:59:12.789642 env[1633]: time="2023-10-02T19:59:12.789586789Z" level=error msg="StartContainer for \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\" failed" error="failed to create containerd task: 
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:12.789916 kubelet[2093]: E1002 19:59:12.789893 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8" Oct 2 19:59:12.790127 kubelet[2093]: E1002 19:59:12.790092 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:12.790127 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:12.790127 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:59:12.790127 kubelet[2093]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8q448,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:12.790560 kubelet[2093]: E1002 19:59:12.790150 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:59:12.976065 kubelet[2093]: I1002 19:59:12.976033 2093 scope.go:115] "RemoveContainer" containerID="3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0" Oct 2 19:59:12.976570 kubelet[2093]: I1002 19:59:12.976545 2093 scope.go:115] "RemoveContainer" containerID="3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0" Oct 2 19:59:12.983707 env[1633]: time="2023-10-02T19:59:12.981230125Z" level=info msg="RemoveContainer for \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\"" Oct 2 19:59:12.987399 env[1633]: time="2023-10-02T19:59:12.987308299Z" level=info msg="RemoveContainer for \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\"" Oct 2 19:59:12.988479 env[1633]: time="2023-10-02T19:59:12.987764397Z" level=error msg="RemoveContainer for \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\" failed" error="failed to set removing state for container \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\": container is already in removing state" Oct 2 19:59:12.990135 kubelet[2093]: E1002 19:59:12.989775 2093 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\": container is already in removing state" containerID="3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0" Oct 2 19:59:12.994420 kubelet[2093]: E1002 19:59:12.990323 2093 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0": container is already in removing state; Skipping pod "cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf)" Oct 2 
19:59:12.994420 kubelet[2093]: E1002 19:59:12.991163 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf)\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:59:13.001998 env[1633]: time="2023-10-02T19:59:13.001771809Z" level=info msg="RemoveContainer for \"3df317c181554e2ef6f4a2acaa0e1bb90b335c718f69f4d7489b6ef83c6906b0\" returns successfully" Oct 2 19:59:13.318109 kubelet[2093]: E1002 19:59:13.318041 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.446155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8-rootfs.mount: Deactivated successfully. Oct 2 19:59:14.318534 kubelet[2093]: E1002 19:59:14.318484 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:15.319695 kubelet[2093]: E1002 19:59:15.319641 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:15.879875 kubelet[2093]: W1002 19:59:15.879833 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda302d3b4_9c9c_4d55_88c6_b98b9d56dbdf.slice/cri-containerd-fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8.scope WatchSource:0}: task fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8 not found: not found Oct 2 19:59:16.320001 kubelet[2093]: E1002 19:59:16.319946 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:17.320980 kubelet[2093]: E1002 19:59:17.320930 2093 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:17.367557 kubelet[2093]: E1002 19:59:17.367501 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:18.321570 kubelet[2093]: E1002 19:59:18.321518 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:19.321779 kubelet[2093]: E1002 19:59:19.321724 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:20.322756 kubelet[2093]: E1002 19:59:20.322700 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:21.323184 kubelet[2093]: E1002 19:59:21.323147 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:22.323824 kubelet[2093]: E1002 19:59:22.323769 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:22.369433 kubelet[2093]: E1002 19:59:22.369359 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:23.324431 kubelet[2093]: E1002 19:59:23.324382 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:24.325356 kubelet[2093]: E1002 19:59:24.325305 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:24.432051 kubelet[2093]: E1002 19:59:24.431918 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf)\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:59:25.325834 kubelet[2093]: E1002 19:59:25.325784 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:26.326625 kubelet[2093]: E1002 19:59:26.326576 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.145016 kubelet[2093]: E1002 19:59:27.144977 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.184502 env[1633]: time="2023-10-02T19:59:27.184400535Z" level=info msg="StopPodSandbox for \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\"" Oct 2 19:59:27.184935 env[1633]: time="2023-10-02T19:59:27.184565037Z" level=info msg="TearDown network for sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" successfully" Oct 2 19:59:27.184935 env[1633]: time="2023-10-02T19:59:27.184615534Z" level=info msg="StopPodSandbox for \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" returns successfully" Oct 2 19:59:27.186638 env[1633]: time="2023-10-02T19:59:27.185252983Z" level=info msg="RemovePodSandbox for \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\"" Oct 2 19:59:27.186763 env[1633]: time="2023-10-02T19:59:27.186654274Z" level=info msg="Forcibly stopping sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\"" Oct 2 19:59:27.186836 env[1633]: time="2023-10-02T19:59:27.186753861Z" level=info msg="TearDown network for sandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" successfully" Oct 2 19:59:27.198426 env[1633]: 
time="2023-10-02T19:59:27.198377163Z" level=info msg="RemovePodSandbox \"2d306b2fcd615eb86dacbfb4a381c3885d31f2854b4e4a35d69d58c4f6556b9b\" returns successfully" Oct 2 19:59:27.326829 kubelet[2093]: E1002 19:59:27.326740 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.370518 kubelet[2093]: E1002 19:59:27.370479 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:28.327740 kubelet[2093]: E1002 19:59:28.327687 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:29.328897 kubelet[2093]: E1002 19:59:29.328846 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:30.329752 kubelet[2093]: E1002 19:59:30.329700 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.330652 kubelet[2093]: E1002 19:59:31.330599 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:32.331062 kubelet[2093]: E1002 19:59:32.331022 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:32.372046 kubelet[2093]: E1002 19:59:32.372005 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:33.332137 kubelet[2093]: E1002 19:59:33.332089 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:34.332889 kubelet[2093]: E1002 19:59:34.332847 2093 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:35.333626 kubelet[2093]: E1002 19:59:35.333573 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:36.334554 kubelet[2093]: E1002 19:59:36.334499 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:36.434544 env[1633]: time="2023-10-02T19:59:36.434495836Z" level=info msg="CreateContainer within sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:59:36.457547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613357682.mount: Deactivated successfully. Oct 2 19:59:36.469525 env[1633]: time="2023-10-02T19:59:36.469464528Z" level=info msg="CreateContainer within sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e\"" Oct 2 19:59:36.470547 env[1633]: time="2023-10-02T19:59:36.470508019Z" level=info msg="StartContainer for \"8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e\"" Oct 2 19:59:36.502874 systemd[1]: Started cri-containerd-8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e.scope. Oct 2 19:59:36.522293 systemd[1]: cri-containerd-8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e.scope: Deactivated successfully. 
Oct 2 19:59:36.542665 env[1633]: time="2023-10-02T19:59:36.542520368Z" level=info msg="shim disconnected" id=8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e Oct 2 19:59:36.542896 env[1633]: time="2023-10-02T19:59:36.542666423Z" level=warning msg="cleaning up after shim disconnected" id=8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e namespace=k8s.io Oct 2 19:59:36.542896 env[1633]: time="2023-10-02T19:59:36.542683102Z" level=info msg="cleaning up dead shim" Oct 2 19:59:36.558671 env[1633]: time="2023-10-02T19:59:36.558609338Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3079 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:36.559454 env[1633]: time="2023-10-02T19:59:36.559070039Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:59:36.559913 env[1633]: time="2023-10-02T19:59:36.559725727Z" level=error msg="Failed to pipe stdout of container \"8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e\"" error="reading from a closed fifo" Oct 2 19:59:36.560007 env[1633]: time="2023-10-02T19:59:36.559946201Z" level=error msg="Failed to pipe stderr of container \"8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e\"" error="reading from a closed fifo" Oct 2 19:59:36.562380 env[1633]: time="2023-10-02T19:59:36.562333522Z" level=error msg="StartContainer for \"8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:36.562777 kubelet[2093]: E1002 19:59:36.562755 2093 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e" Oct 2 19:59:36.562910 kubelet[2093]: E1002 19:59:36.562883 2093 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:36.562910 kubelet[2093]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:36.562910 kubelet[2093]: rm /hostbin/cilium-mount Oct 2 19:59:36.562910 kubelet[2093]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8q448,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:36.563147 kubelet[2093]: E1002 19:59:36.562936 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:59:37.024495 kubelet[2093]: I1002 19:59:37.024464 2093 scope.go:115] "RemoveContainer" containerID="fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8" Oct 2 19:59:37.024873 kubelet[2093]: I1002 19:59:37.024846 2093 scope.go:115] "RemoveContainer" containerID="fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8" Oct 2 19:59:37.026130 env[1633]: time="2023-10-02T19:59:37.026091718Z" level=info msg="RemoveContainer for \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\"" Oct 2 19:59:37.026683 env[1633]: time="2023-10-02T19:59:37.026650413Z" level=info msg="RemoveContainer for \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\"" Oct 2 19:59:37.026779 env[1633]: 
time="2023-10-02T19:59:37.026745420Z" level=error msg="RemoveContainer for \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\" failed" error="failed to set removing state for container \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\": container is already in removing state" Oct 2 19:59:37.026952 kubelet[2093]: E1002 19:59:37.026932 2093 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\": container is already in removing state" containerID="fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8" Oct 2 19:59:37.027047 kubelet[2093]: E1002 19:59:37.026968 2093 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8": container is already in removing state; Skipping pod "cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf)" Oct 2 19:59:37.027329 kubelet[2093]: E1002 19:59:37.027285 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf)\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:59:37.030179 env[1633]: time="2023-10-02T19:59:37.030134488Z" level=info msg="RemoveContainer for \"fe50da1a2021d97c1fdc797fd23ade532cc08711605aa7a32041d190056857b8\" returns successfully" Oct 2 19:59:37.334977 kubelet[2093]: E1002 19:59:37.334850 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:37.373150 kubelet[2093]: E1002 19:59:37.373114 2093 kubelet.go:2373] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:37.451149 systemd[1]: run-containerd-runc-k8s.io-8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e-runc.lfXTKV.mount: Deactivated successfully. Oct 2 19:59:37.451273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e-rootfs.mount: Deactivated successfully. Oct 2 19:59:38.335557 kubelet[2093]: E1002 19:59:38.335507 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:39.335717 kubelet[2093]: E1002 19:59:39.335661 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:39.648055 kubelet[2093]: W1002 19:59:39.647935 2093 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda302d3b4_9c9c_4d55_88c6_b98b9d56dbdf.slice/cri-containerd-8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e.scope WatchSource:0}: task 8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e not found: not found Oct 2 19:59:40.335981 kubelet[2093]: E1002 19:59:40.335940 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:41.336300 kubelet[2093]: E1002 19:59:41.336246 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:42.337385 kubelet[2093]: E1002 19:59:42.337331 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:42.374757 kubelet[2093]: E1002 19:59:42.374727 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:43.337959 kubelet[2093]: E1002 19:59:43.337907 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:44.338646 kubelet[2093]: E1002 19:59:44.338595 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:45.339450 kubelet[2093]: E1002 19:59:45.339398 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:46.340441 kubelet[2093]: E1002 19:59:46.340387 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.144772 kubelet[2093]: E1002 19:59:47.144652 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.341021 kubelet[2093]: E1002 19:59:47.340969 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.375479 kubelet[2093]: E1002 19:59:47.375449 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:48.342000 kubelet[2093]: E1002 19:59:48.341955 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:49.342373 kubelet[2093]: E1002 19:59:49.342321 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:50.342447 kubelet[2093]: E1002 19:59:50.342413 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:51.343311 kubelet[2093]: E1002 
19:59:51.343259 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:51.432120 kubelet[2093]: E1002 19:59:51.431657 2093 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7q9vc_kube-system(a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf)\"" pod="kube-system/cilium-7q9vc" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf Oct 2 19:59:52.344345 kubelet[2093]: E1002 19:59:52.344238 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:52.377345 kubelet[2093]: E1002 19:59:52.377302 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:53.345551 kubelet[2093]: E1002 19:59:53.345501 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:54.346356 kubelet[2093]: E1002 19:59:54.346303 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:55.347400 kubelet[2093]: E1002 19:59:55.347344 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:56.348293 kubelet[2093]: E1002 19:59:56.348245 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:57.348885 kubelet[2093]: E1002 19:59:57.348835 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:57.379174 kubelet[2093]: E1002 19:59:57.379142 2093 kubelet.go:2373] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:58.349063 kubelet[2093]: E1002 19:59:58.349012 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:58.725954 env[1633]: time="2023-10-02T19:59:58.719208004Z" level=info msg="StopPodSandbox for \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\"" Oct 2 19:59:58.725954 env[1633]: time="2023-10-02T19:59:58.719526812Z" level=info msg="Container to stop \"8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:59:58.723130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130-shm.mount: Deactivated successfully. Oct 2 19:59:58.734729 systemd[1]: cri-containerd-bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130.scope: Deactivated successfully. 
Oct 2 19:59:58.733000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:59:58.737002 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:59:58.737181 kernel: audit: type=1334 audit(1696276798.733:795): prog-id=91 op=UNLOAD Oct 2 19:59:58.744000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:59:58.748358 kernel: audit: type=1334 audit(1696276798.744:796): prog-id=94 op=UNLOAD Oct 2 19:59:58.771877 env[1633]: time="2023-10-02T19:59:58.771804647Z" level=info msg="StopContainer for \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\" with timeout 30 (s)" Oct 2 19:59:58.774248 env[1633]: time="2023-10-02T19:59:58.774205629Z" level=info msg="Stop container \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\" with signal terminated" Oct 2 19:59:58.793716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130-rootfs.mount: Deactivated successfully. Oct 2 19:59:58.799295 systemd[1]: cri-containerd-a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1.scope: Deactivated successfully. 
Oct 2 19:59:58.798000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:59:58.802292 kernel: audit: type=1334 audit(1696276798.798:797): prog-id=95 op=UNLOAD Oct 2 19:59:58.801000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:59:58.805150 kernel: audit: type=1334 audit(1696276798.801:798): prog-id=98 op=UNLOAD Oct 2 19:59:58.825173 env[1633]: time="2023-10-02T19:59:58.824946656Z" level=info msg="shim disconnected" id=bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130 Oct 2 19:59:58.825173 env[1633]: time="2023-10-02T19:59:58.825016579Z" level=warning msg="cleaning up after shim disconnected" id=bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130 namespace=k8s.io Oct 2 19:59:58.825173 env[1633]: time="2023-10-02T19:59:58.825030577Z" level=info msg="cleaning up dead shim" Oct 2 19:59:58.852951 env[1633]: time="2023-10-02T19:59:58.852861822Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3126 runtime=io.containerd.runc.v2\n" Oct 2 19:59:58.853955 env[1633]: time="2023-10-02T19:59:58.853883228Z" level=info msg="TearDown network for sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" successfully" Oct 2 19:59:58.854067 env[1633]: time="2023-10-02T19:59:58.853953592Z" level=info msg="StopPodSandbox for \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" returns successfully" Oct 2 19:59:58.871625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1-rootfs.mount: Deactivated successfully. 
Oct 2 19:59:58.891194 env[1633]: time="2023-10-02T19:59:58.891050031Z" level=info msg="shim disconnected" id=a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1 Oct 2 19:59:58.891455 env[1633]: time="2023-10-02T19:59:58.891198234Z" level=warning msg="cleaning up after shim disconnected" id=a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1 namespace=k8s.io Oct 2 19:59:58.891455 env[1633]: time="2023-10-02T19:59:58.891213226Z" level=info msg="cleaning up dead shim" Oct 2 19:59:58.901235 env[1633]: time="2023-10-02T19:59:58.901186311Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3146 runtime=io.containerd.runc.v2\n" Oct 2 19:59:58.903339 env[1633]: time="2023-10-02T19:59:58.903298675Z" level=info msg="StopContainer for \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\" returns successfully" Oct 2 19:59:58.904199 env[1633]: time="2023-10-02T19:59:58.904155798Z" level=info msg="StopPodSandbox for \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\"" Oct 2 19:59:58.904507 env[1633]: time="2023-10-02T19:59:58.904222195Z" level=info msg="Container to stop \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:59:58.907389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321-shm.mount: Deactivated successfully. Oct 2 19:59:58.916424 systemd[1]: cri-containerd-293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321.scope: Deactivated successfully. 
Oct 2 19:59:58.915000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:59:58.920107 kernel: audit: type=1334 audit(1696276798.915:799): prog-id=87 op=UNLOAD Oct 2 19:59:58.920000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:59:58.924169 kernel: audit: type=1334 audit(1696276798.920:800): prog-id=90 op=UNLOAD Oct 2 19:59:58.948447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321-rootfs.mount: Deactivated successfully. Oct 2 19:59:58.961646 env[1633]: time="2023-10-02T19:59:58.961597342Z" level=info msg="shim disconnected" id=293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321 Oct 2 19:59:58.961646 env[1633]: time="2023-10-02T19:59:58.961645596Z" level=warning msg="cleaning up after shim disconnected" id=293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321 namespace=k8s.io Oct 2 19:59:58.961646 env[1633]: time="2023-10-02T19:59:58.961657403Z" level=info msg="cleaning up dead shim" Oct 2 19:59:58.973762 env[1633]: time="2023-10-02T19:59:58.973710975Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3178 runtime=io.containerd.runc.v2\n" Oct 2 19:59:58.974093 env[1633]: time="2023-10-02T19:59:58.974046344Z" level=info msg="TearDown network for sandbox \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\" successfully" Oct 2 19:59:58.974200 env[1633]: time="2023-10-02T19:59:58.974092391Z" level=info msg="StopPodSandbox for \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\" returns successfully" Oct 2 19:59:59.028428 kubelet[2093]: I1002 19:59:59.028389 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-hostproc\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028428 kubelet[2093]: I1002 
19:59:59.028442 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-host-proc-sys-net\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028718 kubelet[2093]: I1002 19:59:59.028468 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-cgroup\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028718 kubelet[2093]: I1002 19:59:59.028490 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-lib-modules\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028718 kubelet[2093]: I1002 19:59:59.028511 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-run\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028718 kubelet[2093]: I1002 19:59:59.028532 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cni-path\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028718 kubelet[2093]: I1002 19:59:59.028555 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-host-proc-sys-kernel\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: 
\"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028718 kubelet[2093]: I1002 19:59:59.028589 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-ipsec-secrets\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028996 kubelet[2093]: I1002 19:59:59.028615 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-hubble-tls\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028996 kubelet[2093]: I1002 19:59:59.028645 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-bpf-maps\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028996 kubelet[2093]: I1002 19:59:59.028697 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-clustermesh-secrets\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028996 kubelet[2093]: I1002 19:59:59.028726 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q448\" (UniqueName: \"kubernetes.io/projected/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-kube-api-access-8q448\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028996 kubelet[2093]: I1002 19:59:59.028763 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-etc-cni-netd\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.028996 kubelet[2093]: I1002 19:59:59.028791 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-xtables-lock\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.029334 kubelet[2093]: I1002 19:59:59.028824 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-config-path\") pod \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\" (UID: \"a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf\") " Oct 2 19:59:59.029334 kubelet[2093]: I1002 19:59:59.029056 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.029334 kubelet[2093]: I1002 19:59:59.029123 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-hostproc" (OuterVolumeSpecName: "hostproc") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.029334 kubelet[2093]: I1002 19:59:59.029147 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.029334 kubelet[2093]: I1002 19:59:59.029170 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.029856 kubelet[2093]: I1002 19:59:59.029190 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.029856 kubelet[2093]: I1002 19:59:59.029211 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.029856 kubelet[2093]: I1002 19:59:59.029232 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cni-path" (OuterVolumeSpecName: "cni-path") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.030103 kubelet[2093]: W1002 19:59:59.030027 2093 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:59:59.035363 kubelet[2093]: I1002 19:59:59.035316 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:59:59.035907 kubelet[2093]: I1002 19:59:59.035398 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.035907 kubelet[2093]: I1002 19:59:59.035422 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.043449 kubelet[2093]: I1002 19:59:59.043401 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:59.043606 kubelet[2093]: I1002 19:59:59.043571 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:59:59.044374 kubelet[2093]: I1002 19:59:59.044345 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-kube-api-access-8q448" (OuterVolumeSpecName: "kube-api-access-8q448") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "kube-api-access-8q448". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:59.044698 kubelet[2093]: I1002 19:59:59.044676 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:59:59.047829 kubelet[2093]: I1002 19:59:59.047783 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf" (UID: "a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:59.077461 kubelet[2093]: I1002 19:59:59.077289 2093 scope.go:115] "RemoveContainer" containerID="a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1" Oct 2 19:59:59.082110 env[1633]: time="2023-10-02T19:59:59.081254881Z" level=info msg="RemoveContainer for \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\"" Oct 2 19:59:59.093985 env[1633]: time="2023-10-02T19:59:59.093924594Z" level=info msg="RemoveContainer for \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\" returns successfully" Oct 2 19:59:59.094264 kubelet[2093]: I1002 19:59:59.094234 2093 scope.go:115] "RemoveContainer" containerID="a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1" Oct 2 19:59:59.094788 env[1633]: time="2023-10-02T19:59:59.094618154Z" level=error msg="ContainerStatus for \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\": not found" Oct 2 19:59:59.099847 kubelet[2093]: E1002 19:59:59.099325 2093 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\": not found" containerID="a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1" Oct 2 19:59:59.100458 kubelet[2093]: I1002 19:59:59.100426 2093 
pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1} err="failed to get container status \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3b48f3d98ffe70ec5fca39657f61864232c87c3df623d691a0ccaa724395ec1\": not found" Oct 2 19:59:59.100458 kubelet[2093]: I1002 19:59:59.100464 2093 scope.go:115] "RemoveContainer" containerID="8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e" Oct 2 19:59:59.103981 systemd[1]: Removed slice kubepods-burstable-poda302d3b4_9c9c_4d55_88c6_b98b9d56dbdf.slice. Oct 2 19:59:59.107964 env[1633]: time="2023-10-02T19:59:59.107705211Z" level=info msg="RemoveContainer for \"8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e\"" Oct 2 19:59:59.111548 env[1633]: time="2023-10-02T19:59:59.111499489Z" level=info msg="RemoveContainer for \"8ae587663f7e28077c92c047b6fcd7863522d0b8314fed00988125697d3cc47e\" returns successfully" Oct 2 19:59:59.129255 kubelet[2093]: I1002 19:59:59.129221 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/193bdd93-1e36-4c1e-ba1b-d603917d4881-cilium-config-path\") pod \"193bdd93-1e36-4c1e-ba1b-d603917d4881\" (UID: \"193bdd93-1e36-4c1e-ba1b-d603917d4881\") " Oct 2 19:59:59.129557 kubelet[2093]: W1002 19:59:59.129439 2093 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/193bdd93-1e36-4c1e-ba1b-d603917d4881/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:59:59.129671 kubelet[2093]: I1002 19:59:59.129658 2093 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxd46\" (UniqueName: \"kubernetes.io/projected/193bdd93-1e36-4c1e-ba1b-d603917d4881-kube-api-access-bxd46\") pod 
\"193bdd93-1e36-4c1e-ba1b-d603917d4881\" (UID: \"193bdd93-1e36-4c1e-ba1b-d603917d4881\") " Oct 2 19:59:59.130337 kubelet[2093]: I1002 19:59:59.130317 2093 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-host-proc-sys-kernel\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.130467 kubelet[2093]: I1002 19:59:59.130457 2093 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-ipsec-secrets\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.130547 kubelet[2093]: I1002 19:59:59.130539 2093 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-run\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.130623 kubelet[2093]: I1002 19:59:59.130615 2093 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cni-path\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.130698 kubelet[2093]: I1002 19:59:59.130691 2093 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-clustermesh-secrets\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.130936 kubelet[2093]: I1002 19:59:59.130924 2093 reconciler.go:399] "Volume detached for volume \"kube-api-access-8q448\" (UniqueName: \"kubernetes.io/projected/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-kube-api-access-8q448\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131084 kubelet[2093]: I1002 19:59:59.131025 2093 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-hubble-tls\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131181 
kubelet[2093]: I1002 19:59:59.131170 2093 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-bpf-maps\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131254 kubelet[2093]: I1002 19:59:59.131246 2093 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-config-path\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131333 kubelet[2093]: I1002 19:59:59.131325 2093 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-etc-cni-netd\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131408 kubelet[2093]: I1002 19:59:59.131401 2093 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-xtables-lock\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131518 kubelet[2093]: I1002 19:59:59.131508 2093 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-host-proc-sys-net\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131601 kubelet[2093]: I1002 19:59:59.131593 2093 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-cilium-cgroup\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131676 kubelet[2093]: I1002 19:59:59.131668 2093 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-lib-modules\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.131746 kubelet[2093]: I1002 19:59:59.131739 2093 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf-hostproc\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.133016 kubelet[2093]: I1002 19:59:59.132982 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/193bdd93-1e36-4c1e-ba1b-d603917d4881-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "193bdd93-1e36-4c1e-ba1b-d603917d4881" (UID: "193bdd93-1e36-4c1e-ba1b-d603917d4881"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:59:59.135538 kubelet[2093]: I1002 19:59:59.135498 2093 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193bdd93-1e36-4c1e-ba1b-d603917d4881-kube-api-access-bxd46" (OuterVolumeSpecName: "kube-api-access-bxd46") pod "193bdd93-1e36-4c1e-ba1b-d603917d4881" (UID: "193bdd93-1e36-4c1e-ba1b-d603917d4881"). InnerVolumeSpecName "kube-api-access-bxd46". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:59.232003 kubelet[2093]: I1002 19:59:59.231954 2093 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/193bdd93-1e36-4c1e-ba1b-d603917d4881-cilium-config-path\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.232003 kubelet[2093]: I1002 19:59:59.232000 2093 reconciler.go:399] "Volume detached for volume \"kube-api-access-bxd46\" (UniqueName: \"kubernetes.io/projected/193bdd93-1e36-4c1e-ba1b-d603917d4881-kube-api-access-bxd46\") on node \"172.31.18.171\" DevicePath \"\"" Oct 2 19:59:59.350263 kubelet[2093]: E1002 19:59:59.350146 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:59.385469 systemd[1]: Removed slice kubepods-besteffort-pod193bdd93_1e36_4c1e_ba1b_d603917d4881.slice. 
Oct 2 19:59:59.437493 kubelet[2093]: I1002 19:59:59.437456 2093 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=193bdd93-1e36-4c1e-ba1b-d603917d4881 path="/var/lib/kubelet/pods/193bdd93-1e36-4c1e-ba1b-d603917d4881/volumes" Oct 2 19:59:59.438014 kubelet[2093]: I1002 19:59:59.437994 2093 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf path="/var/lib/kubelet/pods/a302d3b4-9c9c-4d55-88c6-b98b9d56dbdf/volumes" Oct 2 19:59:59.721284 systemd[1]: var-lib-kubelet-pods-a302d3b4\x2d9c9c\x2d4d55\x2d88c6\x2db98b9d56dbdf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8q448.mount: Deactivated successfully. Oct 2 19:59:59.721419 systemd[1]: var-lib-kubelet-pods-a302d3b4\x2d9c9c\x2d4d55\x2d88c6\x2db98b9d56dbdf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:59:59.721498 systemd[1]: var-lib-kubelet-pods-a302d3b4\x2d9c9c\x2d4d55\x2d88c6\x2db98b9d56dbdf-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:59:59.721573 systemd[1]: var-lib-kubelet-pods-a302d3b4\x2d9c9c\x2d4d55\x2d88c6\x2db98b9d56dbdf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:59:59.721655 systemd[1]: var-lib-kubelet-pods-193bdd93\x2d1e36\x2d4c1e\x2dba1b\x2dd603917d4881-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbxd46.mount: Deactivated successfully. 
Oct 2 20:00:00.351338 kubelet[2093]: E1002 20:00:00.351278 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:01.351484 kubelet[2093]: E1002 20:00:01.351424 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:02.352238 kubelet[2093]: E1002 20:00:02.352183 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:02.380093 kubelet[2093]: E1002 20:00:02.380043 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:03.352701 kubelet[2093]: E1002 20:00:03.352647 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:04.353217 kubelet[2093]: E1002 20:00:04.353168 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:05.353375 kubelet[2093]: E1002 20:00:05.353323 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:06.354189 kubelet[2093]: E1002 20:00:06.354130 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:07.145210 kubelet[2093]: E1002 20:00:07.145157 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:07.354721 kubelet[2093]: E1002 20:00:07.354669 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:07.381724 kubelet[2093]: E1002 20:00:07.381690 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:08.355519 kubelet[2093]: E1002 20:00:08.355464 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:09.355849 kubelet[2093]: E1002 20:00:09.355804 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:10.356315 kubelet[2093]: E1002 20:00:10.356262 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:11.356685 kubelet[2093]: E1002 20:00:11.356633 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:12.357722 kubelet[2093]: E1002 20:00:12.357672 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:12.382796 kubelet[2093]: E1002 20:00:12.382754 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:13.358093 kubelet[2093]: E1002 20:00:13.358032 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:13.534323 amazon-ssm-agent[1615]: 2023-10-02 20:00:13 INFO Backing off health check to every 600 seconds for 1800 seconds.
Oct 2 20:00:13.635055 amazon-ssm-agent[1615]: 2023-10-02 20:00:13 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-07983d4f322ce3793 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-07983d4f322ce3793 because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct 2 20:00:13.635055 amazon-ssm-agent[1615]: status code: 400, request id: 42961381-5cbc-49c7-b233-3a7db38acf72
Oct 2 20:00:14.359160 kubelet[2093]: E1002 20:00:14.359112 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:15.359819 kubelet[2093]: E1002 20:00:15.359745 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:15.998346 kubelet[2093]: E1002 20:00:15.998288 2093 controller.go:187] failed to update lease, error: Put "https://172.31.23.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.171?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct 2 20:00:16.360697 kubelet[2093]: E1002 20:00:16.360571 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:17.361896 kubelet[2093]: E1002 20:00:17.361839 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:17.384390 kubelet[2093]: E1002 20:00:17.384355 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:18.362256 kubelet[2093]: E1002 20:00:18.362202 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:19.363100 kubelet[2093]: E1002 20:00:19.363049 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:20.363575 kubelet[2093]: E1002 20:00:20.363522 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:21.364428 kubelet[2093]: E1002 20:00:21.364385 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:22.365385 kubelet[2093]: E1002 20:00:22.365332 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:22.385185 kubelet[2093]: E1002 20:00:22.385150 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:23.366397 kubelet[2093]: E1002 20:00:23.366356 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:24.367377 kubelet[2093]: E1002 20:00:24.367329 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:25.368266 kubelet[2093]: E1002 20:00:25.368216 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:26.000672 kubelet[2093]: E1002 20:00:26.000631 2093 controller.go:187] failed to update lease, error: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.18.171)
Oct 2 20:00:26.368908 kubelet[2093]: E1002 20:00:26.368752 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:27.144514 kubelet[2093]: E1002 20:00:27.144436 2093 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:27.201500 env[1633]: time="2023-10-02T20:00:27.201459003Z" level=info msg="StopPodSandbox for \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\""
Oct 2 20:00:27.201998 env[1633]: time="2023-10-02T20:00:27.201595791Z" level=info msg="TearDown network for sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" successfully"
Oct 2 20:00:27.201998 env[1633]: time="2023-10-02T20:00:27.201643791Z" level=info msg="StopPodSandbox for \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" returns successfully"
Oct 2 20:00:27.202336 env[1633]: time="2023-10-02T20:00:27.202301754Z" level=info msg="RemovePodSandbox for \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\""
Oct 2 20:00:27.202460 env[1633]: time="2023-10-02T20:00:27.202338263Z" level=info msg="Forcibly stopping sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\""
Oct 2 20:00:27.202460 env[1633]: time="2023-10-02T20:00:27.202418787Z" level=info msg="TearDown network for sandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" successfully"
Oct 2 20:00:27.205742 env[1633]: time="2023-10-02T20:00:27.205706916Z" level=info msg="RemovePodSandbox \"bfb9072015e106138c1a277f080bf550abc3e82d0ff9646d7a18a136045e7130\" returns successfully"
Oct 2 20:00:27.206299 env[1633]: time="2023-10-02T20:00:27.206184942Z" level=info msg="StopPodSandbox for \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\""
Oct 2 20:00:27.206400 env[1633]: time="2023-10-02T20:00:27.206340383Z" level=info msg="TearDown network for sandbox \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\" successfully"
Oct 2 20:00:27.206400 env[1633]: time="2023-10-02T20:00:27.206385974Z" level=info msg="StopPodSandbox for \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\" returns successfully"
Oct 2 20:00:27.206732 env[1633]: time="2023-10-02T20:00:27.206705726Z" level=info msg="RemovePodSandbox for \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\""
Oct 2 20:00:27.206828 env[1633]: time="2023-10-02T20:00:27.206734498Z" level=info msg="Forcibly stopping sandbox \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\""
Oct 2 20:00:27.206828 env[1633]: time="2023-10-02T20:00:27.206812637Z" level=info msg="TearDown network for sandbox \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\" successfully"
Oct 2 20:00:27.210482 env[1633]: time="2023-10-02T20:00:27.210447032Z" level=info msg="RemovePodSandbox \"293ee19ace905daa02000b38c5112ff47b1f90180d131596f5a549beefd78321\" returns successfully"
Oct 2 20:00:27.369138 kubelet[2093]: E1002 20:00:27.369100 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:27.386960 kubelet[2093]: E1002 20:00:27.386933 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:28.369730 kubelet[2093]: E1002 20:00:28.369674 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:29.370776 kubelet[2093]: E1002 20:00:29.369788 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:30.371431 kubelet[2093]: E1002 20:00:30.371378 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:31.372057 kubelet[2093]: E1002 20:00:31.372004 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:32.372988 kubelet[2093]: E1002 20:00:32.372896 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:32.389255 kubelet[2093]: E1002 20:00:32.389179 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:33.373852 kubelet[2093]: E1002 20:00:33.373801 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:34.374785 kubelet[2093]: E1002 20:00:34.374731 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:34.737931 kubelet[2093]: E1002 20:00:34.737344 2093 controller.go:187] failed to update lease, error: Put "https://172.31.23.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.171?timeout=10s": unexpected EOF
Oct 2 20:00:34.737931 kubelet[2093]: E1002 20:00:34.737712 2093 controller.go:187] failed to update lease, error: Put "https://172.31.23.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.171?timeout=10s": dial tcp 172.31.23.8:6443: connect: connection refused
Oct 2 20:00:34.738662 kubelet[2093]: E1002 20:00:34.738637 2093 controller.go:187] failed to update lease, error: Put "https://172.31.23.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.171?timeout=10s": dial tcp 172.31.23.8:6443: connect: connection refused
Oct 2 20:00:34.738662 kubelet[2093]: I1002 20:00:34.738664 2093 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Oct 2 20:00:34.739281 kubelet[2093]: E1002 20:00:34.739252 2093 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.23.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.171?timeout=10s": dial tcp 172.31.23.8:6443: connect: connection refused
Oct 2 20:00:34.940121 kubelet[2093]: E1002 20:00:34.940052 2093 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.23.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.171?timeout=10s": dial tcp 172.31.23.8:6443: connect: connection refused
Oct 2 20:00:35.342348 kubelet[2093]: E1002 20:00:35.342289 2093 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.23.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.171?timeout=10s": dial tcp 172.31.23.8:6443: connect: connection refused
Oct 2 20:00:35.376381 kubelet[2093]: E1002 20:00:35.376115 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:35.745337 kubelet[2093]: E1002 20:00:35.745053 2093 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"172.31.18.171\": Get \"https://172.31.23.8:6443/api/v1/nodes/172.31.18.171?resourceVersion=0&timeout=10s\": dial tcp 172.31.23.8:6443: connect: connection refused"
Oct 2 20:00:35.745550 kubelet[2093]: E1002 20:00:35.745526 2093 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"172.31.18.171\": Get \"https://172.31.23.8:6443/api/v1/nodes/172.31.18.171?timeout=10s\": dial tcp 172.31.23.8:6443: connect: connection refused"
Oct 2 20:00:35.746173 kubelet[2093]: E1002 20:00:35.746066 2093 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"172.31.18.171\": Get \"https://172.31.23.8:6443/api/v1/nodes/172.31.18.171?timeout=10s\": dial tcp 172.31.23.8:6443: connect: connection refused"
Oct 2 20:00:35.747055 kubelet[2093]: E1002 20:00:35.747033 2093 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"172.31.18.171\": Get \"https://172.31.23.8:6443/api/v1/nodes/172.31.18.171?timeout=10s\": dial tcp 172.31.23.8:6443: connect: connection refused"
Oct 2 20:00:35.747724 kubelet[2093]: E1002 20:00:35.747701 2093 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"172.31.18.171\": Get \"https://172.31.23.8:6443/api/v1/nodes/172.31.18.171?timeout=10s\": dial tcp 172.31.23.8:6443: connect: connection refused"
Oct 2 20:00:35.747724 kubelet[2093]: E1002 20:00:35.747720 2093 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
Oct 2 20:00:36.377100 kubelet[2093]: E1002 20:00:36.377038 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:37.377684 kubelet[2093]: E1002 20:00:37.377441 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:37.390269 kubelet[2093]: E1002 20:00:37.390230 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:38.378017 kubelet[2093]: E1002 20:00:38.377962 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:39.378411 kubelet[2093]: E1002 20:00:39.378357 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:40.379013 kubelet[2093]: E1002 20:00:40.378970 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:41.380108 kubelet[2093]: E1002 20:00:41.380014 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:42.380750 kubelet[2093]: E1002 20:00:42.380695 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:42.391867 kubelet[2093]: E1002 20:00:42.391828 2093 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:43.381747 kubelet[2093]: E1002 20:00:43.381696 2093 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"