Oct 2 19:54:52.071713 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:54:52.071733 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:52.071745 kernel: BIOS-provided physical RAM map: Oct 2 19:54:52.071753 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:54:52.071759 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:54:52.071766 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:54:52.071774 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Oct 2 19:54:52.071781 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Oct 2 19:54:52.071789 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 19:54:52.071796 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:54:52.071803 kernel: NX (Execute Disable) protection: active Oct 2 19:54:52.071809 kernel: SMBIOS 2.8 present. Oct 2 19:54:52.071816 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Oct 2 19:54:52.071823 kernel: Hypervisor detected: KVM Oct 2 19:54:52.071831 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:54:52.071839 kernel: kvm-clock: cpu 0, msr 58f8a001, primary cpu clock Oct 2 19:54:52.071846 kernel: kvm-clock: using sched offset of 6109168903 cycles Oct 2 19:54:52.071854 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:54:52.071861 kernel: tsc: Detected 1996.249 MHz processor Oct 2 19:54:52.071869 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:54:52.071877 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:54:52.071884 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Oct 2 19:54:52.071891 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:54:52.071900 kernel: ACPI: Early table checksum verification disabled Oct 2 19:54:52.071907 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Oct 2 19:54:52.071915 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:54:52.071922 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:54:52.071929 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:54:52.071937 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 2 19:54:52.071944 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:54:52.071951 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:54:52.071958 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Oct 2 19:54:52.071967 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Oct 2 19:54:52.071974 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Oct 2 19:54:52.071982 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Oct 2 19:54:52.071989 
kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Oct 2 19:54:52.071996 kernel: No NUMA configuration found Oct 2 19:54:52.072003 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Oct 2 19:54:52.072010 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Oct 2 19:54:52.072017 kernel: Zone ranges: Oct 2 19:54:52.072030 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:54:52.072037 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Oct 2 19:54:52.086127 kernel: Normal empty Oct 2 19:54:52.086141 kernel: Movable zone start for each node Oct 2 19:54:52.086150 kernel: Early memory node ranges Oct 2 19:54:52.086160 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:54:52.086177 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Oct 2 19:54:52.086186 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Oct 2 19:54:52.086194 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:54:52.086203 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:54:52.086212 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Oct 2 19:54:52.086220 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 19:54:52.086229 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:54:52.086237 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:54:52.086246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:54:52.086256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:54:52.086265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:54:52.086273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:54:52.086282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:54:52.086290 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:54:52.086299 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 19:54:52.086307 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 2 19:54:52.086315 kernel: Booting paravirtualized kernel on KVM Oct 2 19:54:52.086324 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:54:52.086333 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 19:54:52.086343 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 19:54:52.086352 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 19:54:52.086361 kernel: pcpu-alloc: [0] 0 1 Oct 2 19:54:52.086369 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Oct 2 19:54:52.086377 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 2 19:54:52.086385 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Oct 2 19:54:52.086394 kernel: Policy zone: DMA32 Oct 2 19:54:52.086406 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:52.086417 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 2 19:54:52.086425 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:54:52.086434 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 19:54:52.086442 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:54:52.086452 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 121020K reserved, 0K cma-reserved) Oct 2 19:54:52.086460 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:54:52.086468 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:54:52.086477 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:54:52.086487 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:54:52.086496 kernel: rcu: RCU event tracing is enabled. Oct 2 19:54:52.086505 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:54:52.086514 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:54:52.086523 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:54:52.086531 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:54:52.086540 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:54:52.086548 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 2 19:54:52.086556 kernel: Console: colour VGA+ 80x25 Oct 2 19:54:52.086566 kernel: printk: console [tty0] enabled Oct 2 19:54:52.086574 kernel: printk: console [ttyS0] enabled Oct 2 19:54:52.086583 kernel: ACPI: Core revision 20210730 Oct 2 19:54:52.086591 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:54:52.086600 kernel: x2apic enabled Oct 2 19:54:52.086608 kernel: Switched APIC routing to physical x2apic. Oct 2 19:54:52.086616 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:54:52.086625 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:54:52.086633 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Oct 2 19:54:52.086642 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 2 19:54:52.086653 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 2 19:54:52.086662 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:54:52.086670 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:54:52.086678 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:54:52.086687 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:54:52.086695 kernel: Speculative Store Bypass: Vulnerable Oct 2 19:54:52.086703 kernel: x86/fpu: x87 FPU will use FXSAVE Oct 2 19:54:52.086712 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:54:52.086720 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:54:52.086730 kernel: LSM: Security Framework initializing Oct 2 19:54:52.086738 kernel: SELinux: Initializing. Oct 2 19:54:52.086747 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:54:52.086755 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:54:52.086764 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Oct 2 19:54:52.086772 kernel: Performance Events: AMD PMU driver. Oct 2 19:54:52.086780 kernel: ... version: 0 Oct 2 19:54:52.086788 kernel: ... bit width: 48 Oct 2 19:54:52.086797 kernel: ... generic registers: 4 Oct 2 19:54:52.086813 kernel: ... 
value mask: 0000ffffffffffff Oct 2 19:54:52.086821 kernel: ... max period: 00007fffffffffff Oct 2 19:54:52.086832 kernel: ... fixed-purpose events: 0 Oct 2 19:54:52.086840 kernel: ... event mask: 000000000000000f Oct 2 19:54:52.086849 kernel: signal: max sigframe size: 1440 Oct 2 19:54:52.086858 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:54:52.086866 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:54:52.086875 kernel: x86: Booting SMP configuration: Oct 2 19:54:52.086885 kernel: .... node #0, CPUs: #1 Oct 2 19:54:52.086894 kernel: kvm-clock: cpu 1, msr 58f8a041, secondary cpu clock Oct 2 19:54:52.086903 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Oct 2 19:54:52.086911 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:54:52.086920 kernel: smpboot: Max logical packages: 2 Oct 2 19:54:52.086930 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Oct 2 19:54:52.086939 kernel: devtmpfs: initialized Oct 2 19:54:52.086947 kernel: x86/mm: Memory block size: 128MB Oct 2 19:54:52.086955 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:54:52.086965 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:54:52.086973 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:54:52.086981 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:54:52.086989 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:54:52.086998 kernel: audit: type=2000 audit(1696276491.368:1): state=initialized audit_enabled=0 res=1 Oct 2 19:54:52.087006 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:54:52.087014 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:54:52.087022 kernel: cpuidle: using governor menu Oct 2 19:54:52.087030 kernel: ACPI: bus type PCI registered Oct 2 19:54:52.087053 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:54:52.087062 kernel: dca service started, version 1.12.1 Oct 2 19:54:52.087070 kernel: PCI: Using configuration type 1 for base access Oct 2 19:54:52.087078 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:54:52.087086 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:54:52.087095 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:54:52.087103 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:54:52.087111 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:54:52.087119 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:54:52.087129 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:54:52.087137 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:54:52.087145 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:54:52.087154 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:54:52.087162 kernel: ACPI: Interpreter enabled Oct 2 19:54:52.087170 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:54:52.087178 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:54:52.087186 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:54:52.087195 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:54:52.087205 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:54:52.087367 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:54:52.087460 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Oct 2 19:54:52.087473 kernel: acpiphp: Slot [3] registered Oct 2 19:54:52.087482 kernel: acpiphp: Slot [4] registered Oct 2 19:54:52.087491 kernel: acpiphp: Slot [5] registered Oct 2 19:54:52.087500 kernel: acpiphp: Slot [6] registered Oct 2 19:54:52.087511 kernel: acpiphp: Slot [7] registered Oct 2 19:54:52.087519 kernel: acpiphp: Slot [8] registered Oct 2 19:54:52.087528 kernel: acpiphp: Slot [9] registered Oct 2 19:54:52.087536 kernel: acpiphp: Slot [10] registered Oct 2 19:54:52.087545 kernel: acpiphp: Slot [11] registered Oct 2 19:54:52.087554 kernel: acpiphp: Slot [12] registered Oct 2 19:54:52.087562 kernel: acpiphp: Slot [13] registered Oct 2 19:54:52.087575 kernel: acpiphp: Slot [14] registered Oct 2 19:54:52.087588 kernel: acpiphp: Slot [15] registered Oct 2 19:54:52.087602 kernel: acpiphp: Slot [16] registered Oct 2 19:54:52.087622 kernel: acpiphp: Slot [17] registered Oct 2 19:54:52.087636 kernel: acpiphp: Slot [18] registered Oct 2 19:54:52.087650 kernel: acpiphp: Slot [19] registered Oct 2 19:54:52.087660 kernel: acpiphp: Slot [20] registered Oct 2 19:54:52.087670 kernel: acpiphp: Slot [21] registered Oct 2 19:54:52.087680 kernel: acpiphp: Slot [22] registered Oct 2 19:54:52.087689 kernel: acpiphp: Slot [23] registered Oct 2 19:54:52.087699 kernel: acpiphp: Slot [24] registered Oct 2 19:54:52.087708 kernel: acpiphp: Slot [25] registered Oct 2 19:54:52.087720 kernel: acpiphp: Slot [26] registered Oct 2 19:54:52.087730 kernel: acpiphp: Slot [27] registered Oct 2 19:54:52.087741 kernel: acpiphp: Slot [28] registered Oct 2 19:54:52.087754 kernel: acpiphp: Slot [29] registered Oct 2 19:54:52.087769 kernel: acpiphp: Slot [30] registered Oct 2 19:54:52.087783 kernel: acpiphp: Slot [31] registered Oct 2 19:54:52.087799 kernel: PCI host bridge to bus 0000:00 Oct 2 19:54:52.087929 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:54:52.088027 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:54:52.088357 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:54:52.088443 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 2 
19:54:52.088522 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 19:54:52.088599 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:54:52.088704 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:54:52.088805 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:54:52.088911 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:54:52.089003 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Oct 2 19:54:52.089150 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:54:52.089238 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:54:52.089326 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:54:52.089432 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:54:52.089531 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:54:52.089625 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 19:54:52.089713 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 19:54:52.089846 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Oct 2 19:54:52.089938 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Oct 2 19:54:52.090037 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Oct 2 19:54:52.090155 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Oct 2 19:54:52.090248 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Oct 2 19:54:52.090336 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:54:52.090430 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:54:52.090519 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Oct 2 19:54:52.090606 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Oct 2 19:54:52.090693 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Oct 2 19:54:52.090781 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Oct 2 19:54:52.090885 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:54:52.090974 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:54:52.091083 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Oct 2 19:54:52.091173 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 2 19:54:52.091278 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Oct 2 19:54:52.091370 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Oct 2 19:54:52.091459 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 2 19:54:52.091561 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:54:52.091651 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Oct 2 19:54:52.091738 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Oct 2 19:54:52.091751 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:54:52.091760 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:54:52.091769 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:54:52.091777 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:54:52.091786 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:54:52.091798 kernel: iommu: Default domain type: Translated Oct 2 19:54:52.091807 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Oct 2 19:54:52.091894 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:54:52.091982 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:54:52.092111 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 19:54:52.092125 kernel: vgaarb: loaded Oct 2 19:54:52.092134 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:54:52.092144 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:54:52.092153 kernel: PTP clock support registered Oct 2 19:54:52.092165 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:54:52.092174 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:54:52.092182 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:54:52.092191 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Oct 2 19:54:52.092199 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:54:52.092208 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:54:52.092217 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:54:52.092225 kernel: pnp: PnP ACPI init Oct 2 19:54:52.092322 kernel: pnp 00:03: [dma 2] Oct 2 19:54:52.092339 kernel: pnp: PnP ACPI: found 5 devices Oct 2 19:54:52.092349 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:54:52.092357 kernel: NET: Registered PF_INET protocol family Oct 2 19:54:52.092366 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:54:52.092375 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 2 19:54:52.092384 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:54:52.092393 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:54:52.092401 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 19:54:52.092413 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 2 19:54:52.092422 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:54:52.092431 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:54:52.092439 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:54:52.092448 kernel: NET: Registered PF_XDP protocol family Oct 2 19:54:52.092532 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:54:52.092642 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:54:52.092754 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:54:52.092857 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 2 19:54:52.092962 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 19:54:52.093685 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:54:52.093787 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:54:52.093874 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:54:52.093887 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:54:52.093896 kernel: Initialise system trusted keyrings Oct 2 19:54:52.093905 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 2 19:54:52.093917 kernel: Key type asymmetric registered Oct 2 19:54:52.093925 kernel: Asymmetric key parser 'x509' registered Oct 2 19:54:52.093934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:54:52.093943 kernel: io scheduler mq-deadline 
registered Oct 2 19:54:52.093952 kernel: io scheduler kyber registered Oct 2 19:54:52.093960 kernel: io scheduler bfq registered Oct 2 19:54:52.093969 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:54:52.093978 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 2 19:54:52.093987 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:54:52.093996 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 2 19:54:52.094007 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:54:52.094016 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:54:52.094025 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:54:52.094033 kernel: random: crng init done Oct 2 19:54:52.094072 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:54:52.094083 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:54:52.094091 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:54:52.094100 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:54:52.094193 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 2 19:54:52.094278 kernel: rtc_cmos 00:04: registered as rtc0 Oct 2 19:54:52.094362 kernel: rtc_cmos 00:04: setting system clock to 2023-10-02T19:54:51 UTC (1696276491) Oct 2 19:54:52.094445 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 2 19:54:52.094457 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:54:52.094466 kernel: Segment Routing with IPv6 Oct 2 19:54:52.094474 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:54:52.094508 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:54:52.094518 kernel: Key type dns_resolver registered Oct 2 19:54:52.094531 kernel: IPI shorthand broadcast: enabled Oct 2 19:54:52.094539 kernel: sched_clock: Marking stable (725998723, 122110877)->(906699314, -58589714) Oct 2 19:54:52.094548 kernel: registered taskstats version 1 Oct 2 19:54:52.094557 kernel: Loading compiled-in X.509 certificates Oct 2 19:54:52.094565 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:54:52.094574 kernel: Key type .fscrypt registered Oct 2 19:54:52.094583 kernel: Key type fscrypt-provisioning registered Oct 2 19:54:52.094592 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 2 19:54:52.094602 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:54:52.094611 kernel: ima: No architecture policies found Oct 2 19:54:52.094619 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:54:52.094628 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:54:52.094637 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:54:52.094645 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:54:52.094654 kernel: Run /init as init process Oct 2 19:54:52.094662 kernel: with arguments: Oct 2 19:54:52.094671 kernel: /init Oct 2 19:54:52.094681 kernel: with environment: Oct 2 19:54:52.094690 kernel: HOME=/ Oct 2 19:54:52.094698 kernel: TERM=linux Oct 2 19:54:52.094706 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:54:52.094719 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:54:52.094730 systemd[1]: Detected virtualization kvm. Oct 2 19:54:52.094740 systemd[1]: Detected architecture x86-64. Oct 2 19:54:52.094750 systemd[1]: Running in initrd. Oct 2 19:54:52.094761 systemd[1]: No hostname configured, using default hostname. Oct 2 19:54:52.094770 systemd[1]: Hostname set to . Oct 2 19:54:52.094780 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:54:52.094789 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:54:52.094798 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:54:52.094807 systemd[1]: Reached target cryptsetup.target. Oct 2 19:54:52.094817 systemd[1]: Reached target paths.target. Oct 2 19:54:52.094826 systemd[1]: Reached target slices.target. Oct 2 19:54:52.094837 systemd[1]: Reached target swap.target. Oct 2 19:54:52.094846 systemd[1]: Reached target timers.target. Oct 2 19:54:52.094856 systemd[1]: Listening on iscsid.socket. Oct 2 19:54:52.094865 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:54:52.094874 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:54:52.094884 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:54:52.094893 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:54:52.094904 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:54:52.094913 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:54:52.094922 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:54:52.094932 systemd[1]: Reached target sockets.target. Oct 2 19:54:52.094942 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:54:52.094959 systemd[1]: Finished network-cleanup.service. Oct 2 19:54:52.094969 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:54:52.094979 systemd[1]: Starting systemd-journald.service... Oct 2 19:54:52.094988 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:54:52.094997 systemd[1]: Starting systemd-resolved.service... Oct 2 19:54:52.095006 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:54:52.095015 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:54:52.095024 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:54:52.095033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Oct 2 19:54:52.095098 kernel: audit: type=1130 audit(1696276492.082:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.095108 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:54:52.095123 systemd-journald[185]: Journal started Oct 2 19:54:52.095174 systemd-journald[185]: Runtime Journal (/run/log/journal/3c440af97e864f3e9351c298ccf35db5) is 4.9M, max 39.5M, 34.5M free. Oct 2 19:54:52.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.044172 systemd-modules-load[186]: Inserted module 'overlay' Oct 2 19:54:52.110372 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:54:52.110413 kernel: Bridge firewalling registered Oct 2 19:54:52.106048 systemd-modules-load[186]: Inserted module 'br_netfilter' Oct 2 19:54:52.122351 systemd[1]: Started systemd-journald.service. Oct 2 19:54:52.122373 kernel: audit: type=1130 audit(1696276492.112:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.111690 systemd-resolved[187]: Positive Trust Anchors: Oct 2 19:54:52.133685 kernel: audit: type=1130 audit(1696276492.122:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.111699 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:54:52.139787 kernel: audit: type=1130 audit(1696276492.134:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.139808 kernel: SCSI subsystem initialized Oct 2 19:54:52.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.111736 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:54:52.151411 kernel: audit: type=1130 audit(1696276492.141:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:54:52.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.114869 systemd-resolved[187]: Defaulting to hostname 'linux'. Oct 2 19:54:52.122924 systemd[1]: Started systemd-resolved.service. Oct 2 19:54:52.134764 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:54:52.141423 systemd[1]: Reached target nss-lookup.target. Oct 2 19:54:52.154939 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:54:52.159881 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:54:52.159910 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:54:52.162642 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:54:52.165934 systemd-modules-load[186]: Inserted module 'dm_multipath' Oct 2 19:54:52.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.166960 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:54:52.171095 kernel: audit: type=1130 audit(1696276492.167:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.171290 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:54:52.179418 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:54:52.183438 kernel: audit: type=1130 audit(1696276492.179:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.185296 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:54:52.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.186614 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:54:52.190955 kernel: audit: type=1130 audit(1696276492.185:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.196739 dracut-cmdline[209]: dracut-dracut-053 Oct 2 19:54:52.198856 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:54:52.266093 kernel: Loading iSCSI transport class v2.0-870. 
Oct 2 19:54:52.281070 kernel: iscsi: registered transport (tcp) Oct 2 19:54:52.305659 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:54:52.305723 kernel: QLogic iSCSI HBA Driver Oct 2 19:54:52.360161 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:54:52.361700 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:54:52.372487 kernel: audit: type=1130 audit(1696276492.360:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.421167 kernel: raid6: sse2x4 gen() 13151 MB/s Oct 2 19:54:52.438161 kernel: raid6: sse2x4 xor() 4853 MB/s Oct 2 19:54:52.455121 kernel: raid6: sse2x2 gen() 14342 MB/s Oct 2 19:54:52.472114 kernel: raid6: sse2x2 xor() 8748 MB/s Oct 2 19:54:52.489135 kernel: raid6: sse2x1 gen() 11015 MB/s Oct 2 19:54:52.506835 kernel: raid6: sse2x1 xor() 7017 MB/s Oct 2 19:54:52.506906 kernel: raid6: using algorithm sse2x2 gen() 14342 MB/s Oct 2 19:54:52.506933 kernel: raid6: .... xor() 8748 MB/s, rmw enabled Oct 2 19:54:52.507654 kernel: raid6: using ssse3x2 recovery algorithm Oct 2 19:54:52.522133 kernel: xor: measuring software checksum speed Oct 2 19:54:52.524639 kernel: prefetch64-sse : 18470 MB/sec Oct 2 19:54:52.524697 kernel: generic_sse : 16750 MB/sec Oct 2 19:54:52.524724 kernel: xor: using function: prefetch64-sse (18470 MB/sec) Oct 2 19:54:52.639099 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:54:52.652083 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:54:52.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.656000 audit: BPF prog-id=7 op=LOAD Oct 2 19:54:52.656000 audit: BPF prog-id=8 op=LOAD Oct 2 19:54:52.657321 systemd[1]: Starting systemd-udevd.service... Oct 2 19:54:52.675659 systemd-udevd[387]: Using default interface naming scheme 'v252'. Oct 2 19:54:52.686133 systemd[1]: Started systemd-udevd.service. Oct 2 19:54:52.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.690130 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:54:52.709984 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation Oct 2 19:54:52.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.761510 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:54:52.764445 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:54:52.802471 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:54:52.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:52.877071 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Oct 2 19:54:52.900080 kernel: libata version 3.00 loaded. 
Oct 2 19:54:52.905120 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:54:52.909844 kernel: scsi host0: ata_piix Oct 2 19:54:52.912075 kernel: scsi host1: ata_piix Oct 2 19:54:52.912237 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Oct 2 19:54:52.912261 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Oct 2 19:54:52.987942 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:54:52.988014 kernel: GPT:17805311 != 41943039 Oct 2 19:54:52.988035 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:54:52.990216 kernel: GPT:17805311 != 41943039 Oct 2 19:54:52.991856 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:54:52.993902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:54:53.242125 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (438) Oct 2 19:54:53.273809 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:54:53.286423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:54:53.297624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:54:53.307179 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:54:53.308472 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:54:53.312852 systemd[1]: Starting disk-uuid.service... Oct 2 19:54:53.330265 disk-uuid[455]: Primary Header is updated. Oct 2 19:54:53.330265 disk-uuid[455]: Secondary Entries is updated. Oct 2 19:54:53.330265 disk-uuid[455]: Secondary Header is updated. Oct 2 19:54:53.337080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:54:53.348101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:54:54.362110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:54:54.363454 disk-uuid[456]: The operation has completed successfully. Oct 2 19:54:54.430585 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:54:54.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.430805 systemd[1]: Finished disk-uuid.service. Oct 2 19:54:54.447990 systemd[1]: Starting verity-setup.service... Oct 2 19:54:54.467405 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:54:54.585794 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:54:54.590419 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:54:54.596916 systemd[1]: Finished verity-setup.service. Oct 2 19:54:54.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.765088 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:54:54.765477 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:54:54.766174 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:54:54.767020 systemd[1]: Starting ignition-setup.service... Oct 2 19:54:54.768118 systemd[1]: Starting parse-ip-for-networkd.service... 
Oct 2 19:54:54.791486 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:54:54.791557 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:54:54.791569 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:54:54.814877 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:54:54.835445 systemd[1]: Finished ignition-setup.service. Oct 2 19:54:54.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.838554 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:54:54.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:54.965866 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:54:54.967000 audit: BPF prog-id=9 op=LOAD Oct 2 19:54:54.968604 systemd[1]: Starting systemd-networkd.service... Oct 2 19:54:55.007666 systemd-networkd[626]: lo: Link UP Oct 2 19:54:55.008487 systemd-networkd[626]: lo: Gained carrier Oct 2 19:54:55.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.009796 systemd-networkd[626]: Enumeration completed Oct 2 19:54:55.009909 systemd[1]: Started systemd-networkd.service. Oct 2 19:54:55.010520 systemd[1]: Reached target network.target. Oct 2 19:54:55.011326 systemd-networkd[626]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:54:55.013360 systemd-networkd[626]: eth0: Link UP Oct 2 19:54:55.013365 systemd-networkd[626]: eth0: Gained carrier Oct 2 19:54:55.014386 systemd[1]: Starting iscsiuio.service... Oct 2 19:54:55.024895 systemd[1]: Started iscsiuio.service. Oct 2 19:54:55.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.027417 systemd[1]: Starting iscsid.service... Oct 2 19:54:55.031032 iscsid[635]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:54:55.031032 iscsid[635]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:54:55.031032 iscsid[635]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:54:55.031032 iscsid[635]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:54:55.031032 iscsid[635]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:54:55.031032 iscsid[635]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:54:55.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.033896 systemd[1]: Started iscsid.service. 
Oct 2 19:54:55.035827 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:54:55.037704 systemd-networkd[626]: eth0: DHCPv4 address 172.24.4.32/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 2 19:54:55.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.050653 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:54:55.051265 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:54:55.051683 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:54:55.052181 systemd[1]: Reached target remote-fs.target. Oct 2 19:54:55.053564 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:54:55.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.061509 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:54:55.203263 ignition[550]: Ignition 2.14.0 Oct 2 19:54:55.203291 ignition[550]: Stage: fetch-offline Oct 2 19:54:55.203438 ignition[550]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.203494 ignition[550]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 19:54:55.205881 ignition[550]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 19:54:55.206178 ignition[550]: parsed url from cmdline: "" Oct 2 19:54:55.209695 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:54:55.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.206188 ignition[550]: no config URL provided Oct 2 19:54:55.206202 ignition[550]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:54:55.213425 systemd[1]: Starting ignition-fetch.service... Oct 2 19:54:55.206234 ignition[550]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:54:55.206246 ignition[550]: failed to fetch config: resource requires networking Oct 2 19:54:55.206724 ignition[550]: Ignition finished successfully Oct 2 19:54:55.231905 ignition[650]: Ignition 2.14.0 Oct 2 19:54:55.231937 ignition[650]: Stage: fetch Oct 2 19:54:55.232236 ignition[650]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.232289 ignition[650]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 19:54:55.234649 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 19:54:55.234872 ignition[650]: parsed url from cmdline: "" Oct 2 19:54:55.234883 ignition[650]: no config URL provided Oct 2 19:54:55.234896 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:54:55.234916 ignition[650]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:54:55.240893 ignition[650]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 2 19:54:55.240953 ignition[650]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Oct 2 19:54:55.246818 ignition[650]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 2 19:54:55.715964 ignition[650]: GET result: OK Oct 2 19:54:55.716092 ignition[650]: parsing config with SHA512: 00f377e0692bc57272544218619be5cee114be3231f80cfbe51f98ecab91a45002c0531f0457ba565f40105b923f67f2965e828a856a60abc65d29968edb0884 Oct 2 19:54:55.920834 unknown[650]: fetched base config from "system" Oct 2 19:54:55.922144 unknown[650]: fetched base config from "system" Oct 2 19:54:55.922160 unknown[650]: fetched user config from "openstack" Oct 2 19:54:55.922687 ignition[650]: fetch: fetch complete Oct 2 19:54:55.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.924228 systemd[1]: Finished ignition-fetch.service. Oct 2 19:54:55.922694 ignition[650]: fetch: fetch passed Oct 2 19:54:55.926667 systemd[1]: Starting ignition-kargs.service... Oct 2 19:54:55.922774 ignition[650]: Ignition finished successfully Oct 2 19:54:55.949368 ignition[656]: Ignition 2.14.0 Oct 2 19:54:55.949396 ignition[656]: Stage: kargs Oct 2 19:54:55.949673 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.949716 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 19:54:55.952025 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 19:54:55.955854 ignition[656]: kargs: kargs passed Oct 2 19:54:55.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.957814 systemd[1]: Finished ignition-kargs.service. Oct 2 19:54:55.956012 ignition[656]: Ignition finished successfully Oct 2 19:54:55.961192 systemd[1]: Starting ignition-disks.service... Oct 2 19:54:55.975297 ignition[661]: Ignition 2.14.0 Oct 2 19:54:55.976326 ignition[661]: Stage: disks Oct 2 19:54:55.977144 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:55.978616 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 19:54:55.981154 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 19:54:55.984780 ignition[661]: disks: disks passed Oct 2 19:54:55.985808 ignition[661]: Ignition finished successfully Oct 2 19:54:55.987868 systemd[1]: Finished ignition-disks.service. Oct 2 19:54:55.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:55.988521 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:54:55.989978 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:54:55.991538 systemd[1]: Reached target local-fs.target. Oct 2 19:54:55.993612 systemd[1]: Reached target sysinit.target. Oct 2 19:54:55.995107 systemd[1]: Reached target basic.target. Oct 2 19:54:55.997566 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:54:56.021397 systemd-fsck[668]: ROOT: clean, 603/1628000 files, 124049/1617920 blocks Oct 2 19:54:56.031303 systemd[1]: Finished systemd-fsck-root.service. 
Oct 2 19:54:56.045737 kernel: kauditd_printk_skb: 21 callbacks suppressed Oct 2 19:54:56.045785 kernel: audit: type=1130 audit(1696276496.032:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.034454 systemd[1]: Mounting sysroot.mount... Oct 2 19:54:56.065101 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:54:56.066937 systemd[1]: Mounted sysroot.mount. Oct 2 19:54:56.069413 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:54:56.073515 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:54:56.075399 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:54:56.076758 systemd[1]: Starting flatcar-openstack-hostname.service... Oct 2 19:54:56.081575 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:54:56.081655 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:54:56.090040 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:54:56.099288 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:54:56.103615 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:54:56.118224 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (675) Oct 2 19:54:56.121242 initrd-setup-root[680]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:54:56.132034 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:54:56.132119 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:54:56.132133 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:54:56.138499 initrd-setup-root[704]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:54:56.151799 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:54:56.152507 initrd-setup-root[714]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:54:56.159750 initrd-setup-root[722]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:54:56.167422 systemd-networkd[626]: eth0: Gained IPv6LL Oct 2 19:54:56.326648 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:54:56.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.329313 systemd[1]: Starting ignition-mount.service... Oct 2 19:54:56.344413 kernel: audit: type=1130 audit(1696276496.327:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.339805 systemd[1]: Starting sysroot-boot.service... Oct 2 19:54:56.350493 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:54:56.350729 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Oct 2 19:54:56.407746 ignition[742]: INFO : Ignition 2.14.0 Oct 2 19:54:56.407746 ignition[742]: INFO : Stage: mount Oct 2 19:54:56.411238 ignition[742]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:56.411238 ignition[742]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 19:54:56.411238 ignition[742]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 19:54:56.418309 ignition[742]: INFO : mount: mount passed Oct 2 19:54:56.418309 ignition[742]: INFO : Ignition finished successfully Oct 2 19:54:56.430699 kernel: audit: type=1130 audit(1696276496.419:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.417261 systemd[1]: Finished ignition-mount.service. Oct 2 19:54:56.522198 systemd[1]: Finished sysroot-boot.service. Oct 2 19:54:56.527214 kernel: audit: type=1130 audit(1696276496.523:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.550951 coreos-metadata[674]: Oct 02 19:54:56.550 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 2 19:54:56.586424 coreos-metadata[674]: Oct 02 19:54:56.586 INFO Fetch successful Oct 2 19:54:56.587964 coreos-metadata[674]: Oct 02 19:54:56.587 INFO wrote hostname ci-3510-3-0-d-3b9d80edf7.novalocal to /sysroot/etc/hostname Oct 2 19:54:56.598221 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 2 19:54:56.598459 systemd[1]: Finished flatcar-openstack-hostname.service. Oct 2 19:54:56.618645 kernel: audit: type=1130 audit(1696276496.600:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.618697 kernel: audit: type=1131 audit(1696276496.601:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:54:56.603226 systemd[1]: Starting ignition-files.service... Oct 2 19:54:56.627885 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Oct 2 19:54:56.715113 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (751) Oct 2 19:54:56.739678 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:54:56.739780 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:54:56.739807 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:54:56.824457 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:54:56.845082 ignition[770]: INFO : Ignition 2.14.0 Oct 2 19:54:56.845082 ignition[770]: INFO : Stage: files Oct 2 19:54:56.847813 ignition[770]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:54:56.847813 ignition[770]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 19:54:56.847813 ignition[770]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 19:54:56.875461 ignition[770]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:54:56.904827 ignition[770]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:54:56.904827 ignition[770]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:54:56.940190 ignition[770]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:54:56.942149 ignition[770]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:54:56.943914 ignition[770]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:54:56.942764 unknown[770]: wrote ssh authorized keys file for user: core Oct 2 19:54:56.947568 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:54:56.947568 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 19:54:57.145521 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:54:57.417079 ignition[770]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 19:54:57.417079 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:54:57.431906 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:54:57.431906 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 19:54:57.924307 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:54:58.060191 ignition[770]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 19:54:58.063977 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:54:58.063977 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:54:58.063977 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:54:58.216229 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:54:59.719913 ignition[770]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 19:54:59.723533 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:54:59.723533 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:54:59.723533 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:54:59.845589 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:55:02.997769 ignition[770]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 19:55:03.002513 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:55:03.002513 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:55:03.002513 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:55:03.002513 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:55:03.002513 ignition[770]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(9): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(9): op(a): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(9): op(a): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(9): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(b): [started] processing unit "coreos-metadata.service" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(b): op(c): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "20-clct-provider-override.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(b): [finished] processing unit "coreos-metadata.service" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:55:03.002513 ignition[770]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:55:03.074173 kernel: audit: type=1130 audit(1696276503.023:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.074207 kernel: audit: type=1130 audit(1696276503.053:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.074220 kernel: audit: type=1131 audit(1696276503.054:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.074233 kernel: audit: type=1130 audit(1696276503.064:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.018328 systemd[1]: Finished ignition-files.service. 
Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(f): [started] processing unit "prepare-critools.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(f): [finished] processing unit "prepare-critools.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(12): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(13): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:55:03.075083 ignition[770]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:55:03.075083 ignition[770]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:55:03.075083 ignition[770]: INFO : files: files passed Oct 2 19:55:03.075083 ignition[770]: INFO : Ignition finished successfully Oct 2 19:55:03.098014 kernel: audit: type=1130 audit(1696276503.089:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.098059 kernel: audit: type=1131 audit(1696276503.089:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.028462 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:55:03.037671 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:55:03.100950 initrd-setup-root-after-ignition[795]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:55:03.039876 systemd[1]: Starting ignition-quench.service... Oct 2 19:55:03.050561 systemd[1]: ignition-quench.service: Deactivated successfully. 
Oct 2 19:55:03.050740 systemd[1]: Finished ignition-quench.service. Oct 2 19:55:03.059432 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:55:03.064427 systemd[1]: Reached target ignition-complete.target. Oct 2 19:55:03.072342 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:55:03.087569 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:55:03.087665 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:55:03.089932 systemd[1]: Reached target initrd-fs.target. Oct 2 19:55:03.097552 systemd[1]: Reached target initrd.target. Oct 2 19:55:03.098537 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:55:03.099304 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:55:03.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.111351 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:55:03.116446 kernel: audit: type=1130 audit(1696276503.111:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.115959 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:55:03.125981 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:55:03.127125 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:55:03.128205 systemd[1]: Stopped target timers.target. Oct 2 19:55:03.129184 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:55:03.129837 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:55:03.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.130992 systemd[1]: Stopped target initrd.target. Oct 2 19:55:03.138302 kernel: audit: type=1131 audit(1696276503.130:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.138899 systemd[1]: Stopped target basic.target. Oct 2 19:55:03.139463 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:55:03.140364 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:55:03.141317 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:55:03.142299 systemd[1]: Stopped target remote-fs.target. Oct 2 19:55:03.143192 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:55:03.144114 systemd[1]: Stopped target sysinit.target. Oct 2 19:55:03.145014 systemd[1]: Stopped target local-fs.target. Oct 2 19:55:03.145977 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:55:03.146867 systemd[1]: Stopped target swap.target. Oct 2 19:55:03.147686 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:55:03.152251 kernel: audit: type=1131 audit(1696276503.148:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.147854 systemd[1]: Stopped dracut-pre-mount.service. 
Oct 2 19:55:03.148701 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:55:03.157268 kernel: audit: type=1131 audit(1696276503.153:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.152767 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:55:03.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.152906 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:55:03.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.153793 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:55:03.153955 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:55:03.157886 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:55:03.161365 iscsid[635]: iscsid shutting down. Oct 2 19:55:03.158028 systemd[1]: Stopped ignition-files.service. Oct 2 19:55:03.159781 systemd[1]: Stopping ignition-mount.service... Oct 2 19:55:03.162720 systemd[1]: Stopping iscsid.service... Oct 2 19:55:03.163432 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:55:03.163588 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:55:03.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.166819 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:55:03.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:03.173780 ignition[808]: INFO : Ignition 2.14.0 Oct 2 19:55:03.173780 ignition[808]: INFO : Stage: umount Oct 2 19:55:03.173780 ignition[808]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:55:03.173780 ignition[808]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 19:55:03.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.167283 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:55:03.192088 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 19:55:03.192088 ignition[808]: INFO : umount: umount passed Oct 2 19:55:03.192088 ignition[808]: INFO : Ignition finished successfully Oct 2 19:55:03.167450 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:55:03.168081 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:55:03.168223 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:55:03.170880 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:55:03.170987 systemd[1]: Stopped iscsid.service. Oct 2 19:55:03.172667 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:55:03.172748 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:55:03.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.174099 systemd[1]: Stopping iscsiuio.service... Oct 2 19:55:03.181209 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:55:03.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.181338 systemd[1]: Stopped iscsiuio.service. Oct 2 19:55:03.182348 systemd[1]: ignition-mount.service: Deactivated successfully. 
Oct 2 19:55:03.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.207000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:55:03.182425 systemd[1]: Stopped ignition-mount.service. Oct 2 19:55:03.183901 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:55:03.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.183965 systemd[1]: Stopped ignition-disks.service. Oct 2 19:55:03.184425 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:55:03.184461 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:55:03.184924 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:55:03.184959 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:55:03.185458 systemd[1]: Stopped target network.target. Oct 2 19:55:03.185867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:55:03.185909 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:55:03.186400 systemd[1]: Stopped target paths.target. Oct 2 19:55:03.186818 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:55:03.186860 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:55:03.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.187330 systemd[1]: Stopped target slices.target. Oct 2 19:55:03.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.187749 systemd[1]: Stopped target sockets.target. Oct 2 19:55:03.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.188341 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:55:03.188376 systemd[1]: Closed iscsid.socket. Oct 2 19:55:03.188778 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:55:03.188808 systemd[1]: Closed iscsiuio.socket. Oct 2 19:55:03.189203 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:55:03.189238 systemd[1]: Stopped ignition-setup.service. Oct 2 19:55:03.193194 systemd-networkd[626]: eth0: DHCPv6 lease lost Oct 2 19:55:03.225000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:55:03.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.194516 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:55:03.195929 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:55:03.198953 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:55:03.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.200785 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Oct 2 19:55:03.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.200883 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:55:03.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.204054 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:55:03.204190 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:55:03.206508 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:55:03.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.206612 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:55:03.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.207395 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:55:03.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:03.207440 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:55:03.208168 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:55:03.208219 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:55:03.214544 systemd[1]: Stopping network-cleanup.service... Oct 2 19:55:03.215766 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:55:03.215879 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:55:03.218874 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:55:03.218926 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:55:03.219976 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:55:03.220014 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:55:03.221437 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:55:03.224589 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:55:03.225214 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:55:03.225385 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:55:03.227629 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:55:03.227663 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:55:03.228647 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:55:03.228682 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:55:03.229194 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:55:03.229251 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:55:03.229768 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:55:03.229811 systemd[1]: Stopped dracut-cmdline.service. 
Oct 2 19:55:03.231180 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:55:03.231224 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:55:03.233091 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:55:03.233916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:55:03.233968 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:55:03.241572 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:55:03.241670 systemd[1]: Stopped network-cleanup.service. Oct 2 19:55:03.242422 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:55:03.242503 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:55:03.243359 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:55:03.244941 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:55:03.264476 systemd[1]: Switching root. Oct 2 19:55:03.281435 systemd-journald[185]: Journal stopped Oct 2 19:55:08.094456 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Oct 2 19:55:08.094520 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:55:08.094536 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:55:08.094548 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:55:08.094565 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:55:08.094577 kernel: SELinux: policy capability open_perms=1 Oct 2 19:55:08.094589 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:55:08.094600 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:55:08.094612 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:55:08.094623 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:55:08.094634 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:55:08.094645 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:55:08.094658 systemd[1]: Successfully loaded SELinux policy in 107.518ms. Oct 2 19:55:08.094684 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.957ms. Oct 2 19:55:08.094699 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:55:08.094712 systemd[1]: Detected virtualization kvm. Oct 2 19:55:08.094724 systemd[1]: Detected architecture x86-64. Oct 2 19:55:08.094736 systemd[1]: Detected first boot. Oct 2 19:55:08.094749 systemd[1]: Hostname set to <ci-3510-3-0-d-3b9d80edf7.novalocal>. Oct 2 19:55:08.094763 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:55:08.094775 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:55:08.094788 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:55:08.094804 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:55:08.094819 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:55:08.094834 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:55:08.094846 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:55:08.094860 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:55:08.094875 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:55:08.094890 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:55:08.094904 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:55:08.094916 systemd[1]: Created slice system-getty.slice. Oct 2 19:55:08.094929 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:55:08.094941 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:55:08.094954 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:55:08.094967 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:55:08.094979 systemd[1]: Created slice user.slice. Oct 2 19:55:08.094994 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:55:08.095007 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:55:08.095018 systemd[1]: Set up automount boot.automount. Oct 2 19:55:08.095031 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:55:08.095060 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:55:08.095073 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:55:08.095086 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:55:08.095101 systemd[1]: Reached target integritysetup.target. Oct 2 19:55:08.095114 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:55:08.095126 systemd[1]: Reached target remote-fs.target. Oct 2 19:55:08.095138 systemd[1]: Reached target slices.target. Oct 2 19:55:08.095151 systemd[1]: Reached target swap.target. Oct 2 19:55:08.095163 systemd[1]: Reached target torcx.target. Oct 2 19:55:08.095176 systemd[1]: Reached target veritysetup.target. Oct 2 19:55:08.095189 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:55:08.095201 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:55:08.095216 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:55:08.095228 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:55:08.095241 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:55:08.095255 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:55:08.095268 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:55:08.095280 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:55:08.095292 systemd[1]: Mounting media.mount... Oct 2 19:55:08.095305 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:55:08.095318 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:55:08.095333 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:55:08.095345 systemd[1]: Mounting tmp.mount... Oct 2 19:55:08.095358 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:55:08.095370 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:55:08.095383 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:55:08.095395 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:55:08.095407 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:55:08.095420 systemd[1]: Starting modprobe@drm.service... Oct 2 19:55:08.095432 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:55:08.095446 systemd[1]: Starting modprobe@fuse.service... 
Oct 2 19:55:08.095461 systemd[1]: Starting modprobe@loop.service... Oct 2 19:55:08.095473 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:55:08.095486 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:55:08.095499 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:55:08.095511 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:55:08.095525 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:55:08.095539 systemd[1]: Stopped systemd-journald.service. Oct 2 19:55:08.095550 kernel: kauditd_printk_skb: 57 callbacks suppressed Oct 2 19:55:08.095564 kernel: audit: type=1130 audit(1696276508.028:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.095575 kernel: audit: type=1131 audit(1696276508.033:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.095587 systemd[1]: Starting systemd-journald.service... Oct 2 19:55:08.095598 kernel: audit: type=1334 audit(1696276508.034:107): prog-id=18 op=LOAD Oct 2 19:55:08.095609 kernel: audit: type=1334 audit(1696276508.034:108): prog-id=19 op=LOAD Oct 2 19:55:08.095620 kernel: audit: type=1334 audit(1696276508.035:109): prog-id=20 op=LOAD Oct 2 19:55:08.095631 kernel: audit: type=1334 audit(1696276508.035:110): prog-id=16 op=UNLOAD Oct 2 19:55:08.095642 kernel: audit: type=1334 audit(1696276508.035:111): prog-id=17 op=UNLOAD Oct 2 19:55:08.095655 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:55:08.095666 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:55:08.095678 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:55:08.095690 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:55:08.095701 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:55:08.095715 systemd[1]: Stopped verity-setup.service. Oct 2 19:55:08.095727 kernel: fuse: init (API version 7.34) Oct 2 19:55:08.095738 kernel: audit: type=1131 audit(1696276508.076:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.095750 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:55:08.095762 kernel: loop: module loaded Oct 2 19:55:08.095774 kernel: audit: type=1305 audit(1696276508.092:113): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:55:08.095787 kernel: audit: type=1300 audit(1696276508.092:113): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe3bbbca40 a2=4000 a3=7ffe3bbbcadc items=0 ppid=1 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:08.095803 systemd-journald[934]: Journal started Oct 2 19:55:08.095844 systemd-journald[934]: Runtime Journal (/run/log/journal/3c440af97e864f3e9351c298ccf35db5) is 4.9M, max 39.5M, 34.5M free. 
Oct 2 19:55:03.684000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:55:03.828000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:55:03.828000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:55:03.828000 audit: BPF prog-id=10 op=LOAD Oct 2 19:55:03.828000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:55:03.828000 audit: BPF prog-id=11 op=LOAD Oct 2 19:55:03.828000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:55:07.814000 audit: BPF prog-id=12 op=LOAD Oct 2 19:55:07.814000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:55:07.814000 audit: BPF prog-id=13 op=LOAD Oct 2 19:55:07.814000 audit: BPF prog-id=14 op=LOAD Oct 2 19:55:07.815000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:55:07.815000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:55:07.816000 audit: BPF prog-id=15 op=LOAD Oct 2 19:55:07.816000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:55:07.816000 audit: BPF prog-id=16 op=LOAD Oct 2 19:55:07.816000 audit: BPF prog-id=17 op=LOAD Oct 2 19:55:07.816000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:55:07.816000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:55:07.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:07.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:07.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:07.833000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:55:08.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.034000 audit: BPF prog-id=18 op=LOAD Oct 2 19:55:08.034000 audit: BPF prog-id=19 op=LOAD Oct 2 19:55:08.035000 audit: BPF prog-id=20 op=LOAD Oct 2 19:55:08.035000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:55:08.035000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:55:08.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:55:08.092000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:55:08.092000 audit[934]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe3bbbca40 a2=4000 a3=7ffe3bbbcadc items=0 ppid=1 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:08.092000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:55:04.208750 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:55:07.812732 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:55:04.210117 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:55:07.812744 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:55:04.210172 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:55:07.817080 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:55:08.103070 systemd[1]: Started systemd-journald.service. Oct 2 19:55:04.210270 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:55:04.210298 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:55:04.210372 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:55:04.210407 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:55:04.210871 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:55:04.210962 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:55:04.210996 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:55:04.212692 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:55:04.212781 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:55:04.212830 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:55:08.103894 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:55:08.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:04.212872 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:55:04.212916 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:55:04.212954 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:55:07.398611 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:55:07.399357 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:55:07.399509 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:55:08.104626 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:55:07.399717 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:55:07.399781 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:55:07.399856 /usr/lib/systemd/system-generators/torcx-generator[840]: time="2023-10-02T19:55:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:55:08.105175 systemd[1]: Mounted media.mount. Oct 2 19:55:08.105677 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:55:08.106239 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:55:08.106782 systemd[1]: Mounted tmp.mount. 
Oct 2 19:55:08.107622 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:55:08.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.108726 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:55:08.109640 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:55:08.109804 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:55:08.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.111444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:55:08.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.111591 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:55:08.112525 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:55:08.112697 systemd[1]: Finished modprobe@drm.service. Oct 2 19:55:08.113400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:55:08.113596 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:55:08.114363 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:55:08.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:08.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.114978 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:55:08.115660 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:55:08.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.116075 systemd[1]: Finished modprobe@loop.service. Oct 2 19:55:08.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.116987 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:55:08.117910 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:55:08.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.118922 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:55:08.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.119839 systemd[1]: Reached target network-pre.target. Oct 2 19:55:08.121659 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:55:08.123729 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:55:08.126680 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:55:08.128982 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:55:08.131217 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:55:08.132137 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:55:08.133501 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:55:08.134288 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:55:08.138737 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:55:08.141729 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:55:08.146642 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:55:08.148568 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:55:08.150499 systemd-journald[934]: Time spent on flushing to /var/log/journal/3c440af97e864f3e9351c298ccf35db5 is 50.557ms for 1117 entries. Oct 2 19:55:08.150499 systemd-journald[934]: System Journal (/var/log/journal/3c440af97e864f3e9351c298ccf35db5) is 8.0M, max 584.8M, 576.8M free. Oct 2 19:55:08.223955 systemd-journald[934]: Received client request to flush runtime journal. 
Oct 2 19:55:08.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.164097 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:55:08.224351 udevadm[950]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:55:08.164807 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:55:08.177023 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:55:08.199634 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:55:08.202138 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:55:08.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:08.224991 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:55:08.445639 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:55:08.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.009166 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:55:09.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.011000 audit: BPF prog-id=21 op=LOAD Oct 2 19:55:09.011000 audit: BPF prog-id=22 op=LOAD Oct 2 19:55:09.012000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:55:09.012000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:55:09.013497 systemd[1]: Starting systemd-udevd.service... Oct 2 19:55:09.054850 systemd-udevd[953]: Using default interface naming scheme 'v252'. Oct 2 19:55:09.111165 systemd[1]: Started systemd-udevd.service. Oct 2 19:55:09.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.114000 audit: BPF prog-id=23 op=LOAD Oct 2 19:55:09.116947 systemd[1]: Starting systemd-networkd.service... Oct 2 19:55:09.140000 audit: BPF prog-id=24 op=LOAD Oct 2 19:55:09.141000 audit: BPF prog-id=25 op=LOAD Oct 2 19:55:09.142000 audit: BPF prog-id=26 op=LOAD Oct 2 19:55:09.144294 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:55:09.167223 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:55:09.198691 systemd[1]: Started systemd-userdbd.service. 
Oct 2 19:55:09.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.268160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:55:09.274110 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:55:09.279102 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:55:09.296833 systemd-networkd[967]: lo: Link UP Oct 2 19:55:09.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.296859 systemd-networkd[967]: lo: Gained carrier Oct 2 19:55:09.297338 systemd-networkd[967]: Enumeration completed Oct 2 19:55:09.297445 systemd[1]: Started systemd-networkd.service. Oct 2 19:55:09.298452 systemd-networkd[967]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:55:09.300384 systemd-networkd[967]: eth0: Link UP Oct 2 19:55:09.300394 systemd-networkd[967]: eth0: Gained carrier Oct 2 19:55:09.315243 systemd-networkd[967]: eth0: DHCPv4 address 172.24.4.32/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 2 19:55:09.303000 audit[961]: AVC avc: denied { confidentiality } for pid=961 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:55:09.303000 audit[961]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560edcdd5b00 a1=32194 a2=7f4780e19bc5 a3=5 items=106 ppid=953 pid=961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:09.303000 audit: CWD cwd="/" Oct 2 19:55:09.303000 audit: PATH item=0 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=1 name=(null) inode=14555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=2 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=3 name=(null) inode=14556 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=4 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=5 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=6 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=7 name=(null) inode=14558 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=8 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=9 name=(null) inode=14559 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=10 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=11 name=(null) inode=14560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=12 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=13 name=(null) inode=14561 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=14 name=(null) inode=14557 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=15 name=(null) inode=14562 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=16 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=17 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=18 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=19 name=(null) inode=14564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=20 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=21 name=(null) inode=14565 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=22 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=23 name=(null) inode=14566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=24 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=25 name=(null) inode=14567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=26 name=(null) inode=14563 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=27 name=(null) inode=14568 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=28 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=29 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=30 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=31 name=(null) inode=14570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=32 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=33 name=(null) inode=14571 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=34 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=35 name=(null) inode=14572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=36 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=37 name=(null) inode=14573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=38 name=(null) inode=14569 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=39 name=(null) inode=14574 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH 
item=40 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=41 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=42 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=43 name=(null) inode=14576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=44 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=45 name=(null) inode=14577 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=46 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=47 name=(null) inode=14578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=48 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=49 name=(null) inode=14579 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=50 name=(null) inode=14575 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=51 name=(null) inode=14580 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=52 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=53 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=54 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=55 name=(null) inode=14582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=56 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=57 name=(null) inode=14583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=58 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=59 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=60 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=61 name=(null) inode=14585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=62 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=63 name=(null) inode=14586 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=64 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=65 name=(null) inode=14587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=66 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=67 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=68 name=(null) inode=14584 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=69 name=(null) inode=14589 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=70 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=71 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=72 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=73 name=(null) inode=14591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=74 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=75 name=(null) inode=14592 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=76 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=77 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=78 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=79 name=(null) inode=14594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=80 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=81 name=(null) inode=14595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=82 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=83 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=84 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=85 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=86 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=87 name=(null) inode=14598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=88 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=89 name=(null) inode=14599 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=90 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=91 name=(null) inode=14600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=92 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=93 name=(null) inode=14601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=94 name=(null) inode=14581 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=95 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=96 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=97 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=98 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=99 name=(null) inode=14604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=100 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=101 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=102 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=103 name=(null) inode=14606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=104 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PATH item=105 name=(null) inode=14607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:55:09.303000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:55:09.348101 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 19:55:09.351078 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:55:09.357094 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:55:09.400569 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:55:09.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.403417 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:55:09.435359 lvm[982]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:55:09.464406 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:55:09.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.465894 systemd[1]: Reached target cryptsetup.target. Oct 2 19:55:09.469456 systemd[1]: Starting lvm2-activation.service... Oct 2 19:55:09.473509 lvm[983]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:55:09.505410 systemd[1]: Finished lvm2-activation.service. Oct 2 19:55:09.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.506935 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:55:09.508246 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:55:09.508313 systemd[1]: Reached target local-fs.target. Oct 2 19:55:09.509473 systemd[1]: Reached target machines.target. Oct 2 19:55:09.513393 systemd[1]: Starting ldconfig.service... Oct 2 19:55:09.516153 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:55:09.516288 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:55:09.518990 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:55:09.525281 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:55:09.529846 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:55:09.532448 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:55:09.532617 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:55:09.538350 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:55:09.562130 systemd[1]: boot.automount: Got automount request for /boot, triggered by 985 (bootctl) Oct 2 19:55:09.564608 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:55:09.619676 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Oct 2 19:55:09.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:09.942855 systemd-tmpfiles[988]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:55:10.106667 systemd-tmpfiles[988]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:55:10.148889 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:55:10.150614 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:55:10.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.156834 systemd-tmpfiles[988]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:55:10.320553 systemd-fsck[993]: fsck.fat 4.2 (2021-01-31) Oct 2 19:55:10.320553 systemd-fsck[993]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 19:55:10.324761 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:55:10.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.326867 systemd[1]: Mounting boot.mount... Oct 2 19:55:10.344715 systemd[1]: Mounted boot.mount. Oct 2 19:55:10.379974 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:55:10.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.507569 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:55:10.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.510474 systemd[1]: Starting audit-rules.service... Oct 2 19:55:10.512018 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:55:10.513865 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:55:10.517000 audit: BPF prog-id=27 op=LOAD Oct 2 19:55:10.520201 systemd[1]: Starting systemd-resolved.service... Oct 2 19:55:10.521000 audit: BPF prog-id=28 op=LOAD Oct 2 19:55:10.522627 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:55:10.525991 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:55:10.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.538745 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:55:10.539438 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 2 19:55:10.542000 audit[1001]: SYSTEM_BOOT pid=1001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.544512 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:55:10.581997 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:55:10.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:10.620322 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:55:10.620976 systemd[1]: Reached target time-set.target. Oct 2 19:55:10.641218 augenrules[1017]: No rules Oct 2 19:55:10.641000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:55:10.641000 audit[1017]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9fc5bcf0 a2=420 a3=0 items=0 ppid=996 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:10.641000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:55:10.641817 systemd[1]: Finished audit-rules.service. Oct 2 19:55:10.655327 systemd-resolved[999]: Positive Trust Anchors: Oct 2 19:55:10.655344 systemd-resolved[999]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:55:10.655383 systemd-resolved[999]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:55:10.679425 systemd-resolved[999]: Using system hostname 'ci-3510-3-0-d-3b9d80edf7.novalocal'. Oct 2 19:55:10.681479 systemd[1]: Started systemd-resolved.service. Oct 2 19:55:10.682081 systemd[1]: Reached target network.target. Oct 2 19:55:10.682565 systemd[1]: Reached target nss-lookup.target. Oct 2 19:55:10.694300 systemd-networkd[967]: eth0: Gained IPv6LL Oct 2 19:55:10.695057 systemd-timesyncd[1000]: Network configuration changed, trying to establish connection. Oct 2 19:55:11.120983 ldconfig[984]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:55:11.140335 systemd[1]: Finished ldconfig.service. Oct 2 19:55:11.144539 systemd[1]: Starting systemd-update-done.service... Oct 2 19:55:11.161942 systemd[1]: Finished systemd-update-done.service. 
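[Editor's note] At this point eth0, matched earlier by /usr/lib/systemd/network/zz-default.network, holds a DHCPv4 lease of 172.24.4.32/24 and has gained IPv6 link-local connectivity, while systemd-resolved has loaded the root DNSSEC trust anchor and the usual negative anchors for private and special-use zones. As a rough sketch of the kind of catch-all DHCP network unit involved (not the shipped file's exact contents, which are an assumption here):

    # zz-default.network  (illustrative sketch, actual file may differ)
    [Match]
    Name=*
    [Network]
    DHCP=yes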
Oct 2 19:55:11.163355 systemd[1]: Reached target sysinit.target. Oct 2 19:55:11.164756 systemd[1]: Started motdgen.path. Oct 2 19:55:11.166105 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:55:11.167711 systemd[1]: Started logrotate.timer. Oct 2 19:55:11.169091 systemd[1]: Started mdadm.timer. Oct 2 19:55:11.170195 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:55:11.171376 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:55:11.171457 systemd[1]: Reached target paths.target. Oct 2 19:55:11.172578 systemd[1]: Reached target timers.target. Oct 2 19:55:11.174802 systemd[1]: Listening on dbus.socket. Oct 2 19:55:11.178205 systemd[1]: Starting docker.socket... Oct 2 19:55:11.185606 systemd[1]: Listening on sshd.socket. Oct 2 19:55:11.186933 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:55:11.187874 systemd[1]: Listening on docker.socket. Oct 2 19:55:11.189375 systemd[1]: Reached target sockets.target. Oct 2 19:55:11.190517 systemd[1]: Reached target basic.target. Oct 2 19:55:11.191755 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:55:11.191821 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:55:11.193938 systemd[1]: Starting containerd.service... Oct 2 19:55:11.197035 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:55:11.200439 systemd[1]: Starting dbus.service... Oct 2 19:55:11.205326 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:55:11.214493 systemd[1]: Starting extend-filesystems.service... Oct 2 19:55:11.217342 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:55:11.221436 systemd[1]: Starting motdgen.service... Oct 2 19:55:11.227258 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:55:11.233412 systemd[1]: Starting prepare-critools.service... Oct 2 19:55:11.237274 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:55:11.240495 systemd[1]: Starting sshd-keygen.service... Oct 2 19:55:11.245218 systemd[1]: Starting systemd-logind.service... Oct 2 19:55:11.245781 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:55:11.245855 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:55:11.246693 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:55:11.247434 systemd[1]: Starting update-engine.service... Oct 2 19:55:11.251134 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:55:11.262835 jq[1039]: true Oct 2 19:55:11.275116 tar[1041]: ./ Oct 2 19:55:11.275116 tar[1041]: ./macvlan Oct 2 19:55:11.271740 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:55:11.271921 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Oct 2 19:55:11.280987 jq[1030]: false Oct 2 19:55:11.281473 tar[1042]: crictl Oct 2 19:55:11.286235 systemd[1]: Created slice system-sshd.slice. Oct 2 19:55:11.290736 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:55:11.290914 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:55:11.291393 jq[1044]: true Oct 2 19:55:11.343988 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:55:11.344191 systemd[1]: Finished motdgen.service. Oct 2 19:55:11.363947 extend-filesystems[1031]: Found vda Oct 2 19:55:11.363947 extend-filesystems[1031]: Found vda1 Oct 2 19:55:11.368313 extend-filesystems[1031]: Found vda2 Oct 2 19:55:11.368313 extend-filesystems[1031]: Found vda3 Oct 2 19:55:11.368313 extend-filesystems[1031]: Found usr Oct 2 19:55:11.368313 extend-filesystems[1031]: Found vda4 Oct 2 19:55:11.368313 extend-filesystems[1031]: Found vda6 Oct 2 19:55:11.368313 extend-filesystems[1031]: Found vda7 Oct 2 19:55:11.368313 extend-filesystems[1031]: Found vda9 Oct 2 19:55:11.368313 extend-filesystems[1031]: Checking size of /dev/vda9 Oct 2 19:55:11.387459 dbus-daemon[1027]: [system] SELinux support is enabled Oct 2 19:55:11.387710 systemd[1]: Started dbus.service. Oct 2 19:55:11.390449 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:55:11.390471 systemd[1]: Reached target system-config.target. Oct 2 19:55:11.391024 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:55:11.391095 systemd[1]: Reached target user-config.target. Oct 2 19:55:11.401913 env[1043]: time="2023-10-02T19:55:11.401849573Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:55:11.415399 bash[1078]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:55:11.415625 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:55:11.422210 extend-filesystems[1031]: Resized partition /dev/vda9 Oct 2 19:55:11.428989 update_engine[1038]: I1002 19:55:11.427899 1038 main.cc:92] Flatcar Update Engine starting Oct 2 19:55:11.436261 systemd-logind[1037]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:55:11.438823 systemd[1]: Started update-engine.service. Oct 2 19:55:11.441490 systemd[1]: Started locksmithd.service. Oct 2 19:55:11.441792 systemd-logind[1037]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:55:11.442180 systemd-logind[1037]: New seat seat0. Oct 2 19:55:11.442515 update_engine[1038]: I1002 19:55:11.442462 1038 update_check_scheduler.cc:74] Next update check in 3m0s Oct 2 19:55:11.450765 extend-filesystems[1086]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:55:11.446227 systemd[1]: Started systemd-logind.service. Oct 2 19:55:11.482078 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Oct 2 19:55:11.510865 coreos-metadata[1026]: Oct 02 19:55:11.510 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 2 19:55:11.518857 tar[1041]: ./static Oct 2 19:55:11.544607 env[1043]: time="2023-10-02T19:55:11.544540530Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Oct 2 19:55:11.545108 env[1043]: time="2023-10-02T19:55:11.545078489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:55:11.549546 env[1043]: time="2023-10-02T19:55:11.549505011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:55:11.549546 env[1043]: time="2023-10-02T19:55:11.549542301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:55:11.550839 env[1043]: time="2023-10-02T19:55:11.550802475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:55:11.550839 env[1043]: time="2023-10-02T19:55:11.550831669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:55:11.550919 env[1043]: time="2023-10-02T19:55:11.550853771Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:55:11.550919 env[1043]: time="2023-10-02T19:55:11.550867046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:55:11.551011 env[1043]: time="2023-10-02T19:55:11.550984546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:55:11.551646 env[1043]: time="2023-10-02T19:55:11.551616952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:55:11.551799 env[1043]: time="2023-10-02T19:55:11.551766663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:55:11.553101 env[1043]: time="2023-10-02T19:55:11.551793533Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:55:11.553183 env[1043]: time="2023-10-02T19:55:11.553153955Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:55:11.553285 env[1043]: time="2023-10-02T19:55:11.553176407Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:55:11.596868 tar[1041]: ./vlan Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619395401Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619467917Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619484989Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619543859Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619567624Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619583904Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619606627Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619624721Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619640550Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619656440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619673442Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619689572Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619858980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:55:11.621947 env[1043]: time="2023-10-02T19:55:11.619948688Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620438457Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620500934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620537422Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620612613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620639694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620658189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620703153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620726026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620747637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620785798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620805646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.620836153Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.621255229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.621285145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622411 env[1043]: time="2023-10-02T19:55:11.621348654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.622784 env[1043]: time="2023-10-02T19:55:11.621374112Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:55:11.622784 env[1043]: time="2023-10-02T19:55:11.621400712Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:55:11.622784 env[1043]: time="2023-10-02T19:55:11.621452599Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:55:11.622784 env[1043]: time="2023-10-02T19:55:11.621485040Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:55:11.622784 env[1043]: time="2023-10-02T19:55:11.621564519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:55:11.623656 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Oct 2 19:55:11.713102 extend-filesystems[1086]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 2 19:55:11.713102 extend-filesystems[1086]: old_desc_blocks = 1, new_desc_blocks = 3 Oct 2 19:55:11.713102 extend-filesystems[1086]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. 
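[Editor's note] The resize2fs/extend-filesystems messages above record an online grow of the root filesystem on /dev/vda9 from 1,617,920 to 4,635,643 blocks. With the 4 KiB block size shown, that works out to roughly:

    before: 1,617,920 blocks x 4096 B ≈ 6.6 GB  (6.2 GiB)
    after:  4,635,643 blocks x 4096 B ≈ 19.0 GB (17.7 GiB)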
Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.621904026Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.623030348Z" level=info msg="Connect containerd service" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.623099838Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.715585585Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.715745094Z" level=info msg="Start subscribing containerd event" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.715786833Z" level=info msg="Start recovering state" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.715863977Z" level=info msg="Start event monitor" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.715922878Z" level=info msg="Start snapshots syncer" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.715935772Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.715943446Z" level=info msg="Start streaming server" Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.716424388Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 2 19:55:11.725735 env[1043]: time="2023-10-02T19:55:11.716503817Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:55:11.714894 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:55:11.735821 extend-filesystems[1031]: Resized filesystem in /dev/vda9 Oct 2 19:55:11.740179 env[1043]: time="2023-10-02T19:55:11.729198887Z" level=info msg="containerd successfully booted in 0.331920s" Oct 2 19:55:11.715237 systemd[1]: Finished extend-filesystems.service. Oct 2 19:55:11.729399 systemd[1]: Started containerd.service. Oct 2 19:55:11.813217 tar[1041]: ./portmap Oct 2 19:55:11.871307 tar[1041]: ./host-local Oct 2 19:55:11.907653 tar[1041]: ./vrf Oct 2 19:55:11.936717 coreos-metadata[1026]: Oct 02 19:55:11.936 INFO Fetch successful Oct 2 19:55:11.936717 coreos-metadata[1026]: Oct 02 19:55:11.936 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:55:11.946386 tar[1041]: ./bridge Oct 2 19:55:11.950061 coreos-metadata[1026]: Oct 02 19:55:11.950 INFO Fetch successful Oct 2 19:55:11.957402 unknown[1026]: wrote ssh authorized keys file for user: core Oct 2 19:55:11.995932 update-ssh-keys[1095]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:55:11.996364 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:55:12.007703 tar[1041]: ./tuning Oct 2 19:55:12.044215 tar[1041]: ./firewall Oct 2 19:55:12.127500 tar[1041]: ./host-device Oct 2 19:55:12.164206 systemd[1]: Finished prepare-critools.service. Oct 2 19:55:12.181754 tar[1041]: ./sbr Oct 2 19:55:12.216024 tar[1041]: ./loopback Oct 2 19:55:12.249267 tar[1041]: ./dhcp Oct 2 19:55:12.339446 tar[1041]: ./ptp Oct 2 19:55:12.379061 tar[1041]: ./ipvlan Oct 2 19:55:12.415254 tar[1041]: ./bandwidth Oct 2 19:55:12.461997 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:55:12.470011 locksmithd[1087]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:55:13.645850 sshd_keygen[1064]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:55:13.671775 systemd[1]: Finished sshd-keygen.service. Oct 2 19:55:13.673650 systemd[1]: Starting issuegen.service... Oct 2 19:55:13.675084 systemd[1]: Started sshd@0-172.24.4.32:22-172.24.4.1:60208.service. Oct 2 19:55:13.689551 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:55:13.689725 systemd[1]: Finished issuegen.service. Oct 2 19:55:13.691630 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:55:13.701487 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:55:13.703373 systemd[1]: Started getty@tty1.service. Oct 2 19:55:13.705451 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:55:13.706148 systemd[1]: Reached target getty.target. Oct 2 19:55:13.706686 systemd[1]: Reached target multi-user.target. Oct 2 19:55:13.708480 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:55:13.716435 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:55:13.716585 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:55:13.717248 systemd[1]: Startup finished in 1.021s (kernel) + 11.735s (initrd) + 10.178s (userspace) = 22.935s. 
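The metadata-to-SSH-key handoff logged above (coreos-metadata fetching .../public-keys/0/openssh-key, then update-ssh-keys updating "/home/core/.ssh/authorized_keys") can be summarized with a small sketch. This is illustration only, not the actual coreos-metadata or update-ssh-keys implementation; the endpoint URL and target path are the ones in the journal, while the function names, timeout, and append-only write are assumptions.

    # Illustrative sketch of the flow logged above: fetch the instance's public key
    # from the EC2-compatible metadata endpoint, then add it to core's authorized_keys.
    # (The real work is done by coreos-metadata and update-ssh-keys, not this script.)
    import os
    import urllib.request

    METADATA = "http://169.254.169.254/latest/meta-data"

    def fetch(path: str) -> str:
        # Mirrors the "Fetching http://169.254.169.254/latest/meta-data/..." attempts.
        with urllib.request.urlopen(f"{METADATA}/{path}", timeout=5) as resp:
            return resp.read().decode()

    def install_key(home: str = "/home/core") -> None:
        key = fetch("public-keys/0/openssh-key")
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        # update-ssh-keys rewrites the file; appending here is a simplification.
        with open(os.path.join(ssh_dir, "authorized_keys"), "a") as fh:
            fh.write(key.rstrip("\n") + "\n")

    if __name__ == "__main__":
        install_key()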
Oct 2 19:55:14.647725 sshd[1107]: Accepted publickey for core from 172.24.4.1 port 60208 ssh2: RSA SHA256:eMJUoPtRMU1NvNIBGSOXW1dkJdBw8nKNif2sqB5ODJI Oct 2 19:55:14.654035 sshd[1107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:14.686227 systemd-logind[1037]: New session 1 of user core. Oct 2 19:55:14.689698 systemd[1]: Created slice user-500.slice. Oct 2 19:55:14.693388 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:55:14.712985 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:55:14.716535 systemd[1]: Starting user@500.service... Oct 2 19:55:14.742981 (systemd)[1116]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:14.976831 systemd[1116]: Queued start job for default target default.target. Oct 2 19:55:14.977401 systemd[1116]: Reached target paths.target. Oct 2 19:55:14.977422 systemd[1116]: Reached target sockets.target. Oct 2 19:55:14.977438 systemd[1116]: Reached target timers.target. Oct 2 19:55:14.977452 systemd[1116]: Reached target basic.target. Oct 2 19:55:14.977495 systemd[1116]: Reached target default.target. Oct 2 19:55:14.977521 systemd[1116]: Startup finished in 220ms. Oct 2 19:55:14.979345 systemd[1]: Started user@500.service. Oct 2 19:55:14.983142 systemd[1]: Started session-1.scope. Oct 2 19:55:15.369097 systemd[1]: Started sshd@1-172.24.4.32:22-172.24.4.1:36780.service. Oct 2 19:55:17.119820 sshd[1125]: Accepted publickey for core from 172.24.4.1 port 36780 ssh2: RSA SHA256:eMJUoPtRMU1NvNIBGSOXW1dkJdBw8nKNif2sqB5ODJI Oct 2 19:55:17.123995 sshd[1125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:17.135750 systemd-logind[1037]: New session 2 of user core. Oct 2 19:55:17.137347 systemd[1]: Started session-2.scope. Oct 2 19:55:17.767871 sshd[1125]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:17.775191 systemd[1]: Started sshd@2-172.24.4.32:22-172.24.4.1:36788.service. Oct 2 19:55:17.778536 systemd[1]: sshd@1-172.24.4.32:22-172.24.4.1:36780.service: Deactivated successfully. Oct 2 19:55:17.780007 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:55:17.783640 systemd-logind[1037]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:55:17.786001 systemd-logind[1037]: Removed session 2. Oct 2 19:55:19.307715 sshd[1130]: Accepted publickey for core from 172.24.4.1 port 36788 ssh2: RSA SHA256:eMJUoPtRMU1NvNIBGSOXW1dkJdBw8nKNif2sqB5ODJI Oct 2 19:55:19.310327 sshd[1130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:19.320510 systemd-logind[1037]: New session 3 of user core. Oct 2 19:55:19.321245 systemd[1]: Started session-3.scope. Oct 2 19:55:19.955649 sshd[1130]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:19.962682 systemd[1]: Started sshd@3-172.24.4.32:22-172.24.4.1:36802.service. Oct 2 19:55:19.965916 systemd[1]: sshd@2-172.24.4.32:22-172.24.4.1:36788.service: Deactivated successfully. Oct 2 19:55:19.967541 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:55:19.970438 systemd-logind[1037]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:55:19.972892 systemd-logind[1037]: Removed session 3. 
Oct 2 19:55:21.205029 sshd[1136]: Accepted publickey for core from 172.24.4.1 port 36802 ssh2: RSA SHA256:eMJUoPtRMU1NvNIBGSOXW1dkJdBw8nKNif2sqB5ODJI Oct 2 19:55:21.208494 sshd[1136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:21.218357 systemd-logind[1037]: New session 4 of user core. Oct 2 19:55:21.219039 systemd[1]: Started session-4.scope. Oct 2 19:55:21.853588 sshd[1136]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:21.860356 systemd[1]: Started sshd@4-172.24.4.32:22-172.24.4.1:36808.service. Oct 2 19:55:21.861584 systemd[1]: sshd@3-172.24.4.32:22-172.24.4.1:36802.service: Deactivated successfully. Oct 2 19:55:21.862934 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:55:21.867232 systemd-logind[1037]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:55:21.869901 systemd-logind[1037]: Removed session 4. Oct 2 19:55:23.008402 sshd[1142]: Accepted publickey for core from 172.24.4.1 port 36808 ssh2: RSA SHA256:eMJUoPtRMU1NvNIBGSOXW1dkJdBw8nKNif2sqB5ODJI Oct 2 19:55:23.011226 sshd[1142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:23.022110 systemd-logind[1037]: New session 5 of user core. Oct 2 19:55:23.023115 systemd[1]: Started session-5.scope. Oct 2 19:55:23.484142 sudo[1146]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:55:23.485543 sudo[1146]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:55:23.496356 dbus-daemon[1027]: \xd0-f\xd3zU: received setenforce notice (enforcing=-888665680) Oct 2 19:55:23.500551 sudo[1146]: pam_unix(sudo:session): session closed for user root Oct 2 19:55:23.652732 sshd[1142]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:23.659641 systemd[1]: Started sshd@5-172.24.4.32:22-172.24.4.1:36810.service. Oct 2 19:55:23.661941 systemd[1]: sshd@4-172.24.4.32:22-172.24.4.1:36808.service: Deactivated successfully. Oct 2 19:55:23.663648 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:55:23.667839 systemd-logind[1037]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:55:23.670324 systemd-logind[1037]: Removed session 5. Oct 2 19:55:24.893959 sshd[1149]: Accepted publickey for core from 172.24.4.1 port 36810 ssh2: RSA SHA256:eMJUoPtRMU1NvNIBGSOXW1dkJdBw8nKNif2sqB5ODJI Oct 2 19:55:24.897590 sshd[1149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:24.907963 systemd-logind[1037]: New session 6 of user core. Oct 2 19:55:24.908443 systemd[1]: Started session-6.scope. Oct 2 19:55:25.342603 sudo[1154]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:55:25.344238 sudo[1154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:55:25.350434 sudo[1154]: pam_unix(sudo:session): session closed for user root Oct 2 19:55:25.360626 sudo[1153]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:55:25.361166 sudo[1153]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:55:25.382313 systemd[1]: Stopping audit-rules.service... 
Oct 2 19:55:25.383000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:55:25.384579 auditctl[1157]: No rules Oct 2 19:55:25.386412 kernel: kauditd_printk_skb: 164 callbacks suppressed Oct 2 19:55:25.386548 kernel: audit: type=1305 audit(1696276525.383:166): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:55:25.387121 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:55:25.387488 systemd[1]: Stopped audit-rules.service. Oct 2 19:55:25.383000 audit[1157]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd62b77570 a2=420 a3=0 items=0 ppid=1 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:25.393195 systemd[1]: Starting audit-rules.service... Oct 2 19:55:25.402220 kernel: audit: type=1300 audit(1696276525.383:166): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd62b77570 a2=420 a3=0 items=0 ppid=1 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:25.383000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:55:25.406213 kernel: audit: type=1327 audit(1696276525.383:166): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:55:25.406314 kernel: audit: type=1131 audit(1696276525.387:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.443798 augenrules[1174]: No rules Oct 2 19:55:25.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.445317 systemd[1]: Finished audit-rules.service. Oct 2 19:55:25.446860 sudo[1153]: pam_unix(sudo:session): session closed for user root Oct 2 19:55:25.446000 audit[1153]: USER_END pid=1153 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.466703 kernel: audit: type=1130 audit(1696276525.445:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.466832 kernel: audit: type=1106 audit(1696276525.446:169): pid=1153 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:25.466889 kernel: audit: type=1104 audit(1696276525.447:170): pid=1153 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.447000 audit[1153]: CRED_DISP pid=1153 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.620198 sshd[1149]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:25.624000 audit[1149]: USER_END pid=1149 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:25.631279 systemd[1]: Started sshd@6-172.24.4.32:22-172.24.4.1:52130.service. Oct 2 19:55:25.632533 systemd[1]: sshd@5-172.24.4.32:22-172.24.4.1:36810.service: Deactivated successfully. Oct 2 19:55:25.634078 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:55:25.641129 kernel: audit: type=1106 audit(1696276525.624:171): pid=1149 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:25.641250 kernel: audit: type=1104 audit(1696276525.625:172): pid=1149 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:25.625000 audit[1149]: CRED_DISP pid=1149 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:25.647966 systemd-logind[1037]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:55:25.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.32:22-172.24.4.1:52130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.650493 systemd-logind[1037]: Removed session 6. Oct 2 19:55:25.659956 kernel: audit: type=1130 audit(1696276525.629:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.32:22-172.24.4.1:52130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:25.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.24.4.32:22-172.24.4.1:36810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:26.813000 audit[1179]: USER_ACCT pid=1179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:26.814328 sshd[1179]: Accepted publickey for core from 172.24.4.1 port 52130 ssh2: RSA SHA256:eMJUoPtRMU1NvNIBGSOXW1dkJdBw8nKNif2sqB5ODJI Oct 2 19:55:26.815000 audit[1179]: CRED_ACQ pid=1179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:26.816000 audit[1179]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5b7c5570 a2=3 a3=0 items=0 ppid=1 pid=1179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:26.816000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:55:26.817203 sshd[1179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:55:26.827116 systemd-logind[1037]: New session 7 of user core. Oct 2 19:55:26.827781 systemd[1]: Started session-7.scope. Oct 2 19:55:26.840000 audit[1179]: USER_START pid=1179 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:26.844000 audit[1182]: CRED_ACQ pid=1182 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:27.260000 audit[1183]: USER_ACCT pid=1183 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:27.261238 sudo[1183]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:55:27.261000 audit[1183]: CRED_REFR pid=1183 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:27.262528 sudo[1183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:55:27.266000 audit[1183]: USER_START pid=1183 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:27.914370 systemd[1]: Reloading. 
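The audit-rules churn a few lines above boils down to two privileged commands that the journal records verbatim: remove the shipped rule files, then restart audit-rules so auditctl/augenrules reload an empty rule set (hence the two "No rules" messages). A minimal replay sketch, assuming root; the paths and unit name are exactly those in the log, and the use of Python's subprocess here is only for illustration:

    # Replays the sudo commands recorded above; requires root.
    import subprocess

    subprocess.run(
        ["rm", "-rf",
         "/etc/audit/rules.d/80-selinux.rules",
         "/etc/audit/rules.d/99-default.rules"],
        check=True,
    )
    subprocess.run(["systemctl", "restart", "audit-rules"], check=True)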
Oct 2 19:55:28.069131 /usr/lib/systemd/system-generators/torcx-generator[1215]: time="2023-10-02T19:55:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:55:28.069161 /usr/lib/systemd/system-generators/torcx-generator[1215]: time="2023-10-02T19:55:28Z" level=info msg="torcx already run" Oct 2 19:55:28.150601 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:55:28.150624 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:55:28.173003 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:55:28.248000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.248000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.248000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit: BPF prog-id=34 op=LOAD Oct 2 19:55:28.249000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit: BPF prog-id=35 op=LOAD Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.249000 audit: BPF prog-id=36 
op=LOAD Oct 2 19:55:28.249000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:55:28.249000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit: BPF prog-id=37 op=LOAD Oct 2 19:55:28.251000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit: BPF prog-id=38 op=LOAD Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.251000 audit: BPF prog-id=39 op=LOAD Oct 2 19:55:28.251000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:55:28.251000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.252000 audit: BPF prog-id=40 op=LOAD Oct 2 19:55:28.252000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit: BPF prog-id=41 op=LOAD Oct 2 19:55:28.254000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit: BPF prog-id=42 op=LOAD Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.254000 audit: BPF prog-id=43 op=LOAD Oct 2 19:55:28.254000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:55:28.254000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.255000 audit: BPF prog-id=44 op=LOAD Oct 2 19:55:28.255000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.257000 audit: BPF prog-id=45 op=LOAD Oct 2 19:55:28.257000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:55:28.257000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit: BPF prog-id=46 op=LOAD Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit: BPF prog-id=47 op=LOAD Oct 2 19:55:28.258000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:55:28.258000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.258000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:28.259000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:55:28.259000 audit: BPF prog-id=48 op=LOAD Oct 2 19:55:28.259000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:55:28.276633 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:55:28.284600 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:55:28.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:28.285200 systemd[1]: Reached target network-online.target. Oct 2 19:55:28.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:28.287162 systemd[1]: Started kubelet.service. Oct 2 19:55:28.304598 systemd[1]: Starting coreos-metadata.service... Oct 2 19:55:28.358533 coreos-metadata[1266]: Oct 02 19:55:28.358 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 2 19:55:28.372400 kubelet[1259]: E1002 19:55:28.372355 1259 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:55:28.374462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:55:28.374595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:55:28.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:55:28.575179 coreos-metadata[1266]: Oct 02 19:55:28.574 INFO Fetch successful Oct 2 19:55:28.575179 coreos-metadata[1266]: Oct 02 19:55:28.574 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Oct 2 19:55:28.590578 coreos-metadata[1266]: Oct 02 19:55:28.590 INFO Fetch successful Oct 2 19:55:28.590578 coreos-metadata[1266]: Oct 02 19:55:28.590 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Oct 2 19:55:28.608384 coreos-metadata[1266]: Oct 02 19:55:28.608 INFO Fetch successful Oct 2 19:55:28.608384 coreos-metadata[1266]: Oct 02 19:55:28.608 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Oct 2 19:55:28.620931 coreos-metadata[1266]: Oct 02 19:55:28.620 INFO Fetch successful Oct 2 19:55:28.621303 coreos-metadata[1266]: Oct 02 19:55:28.621 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Oct 2 19:55:28.634521 coreos-metadata[1266]: Oct 02 19:55:28.634 INFO Fetch successful Oct 2 19:55:28.650340 systemd[1]: Finished coreos-metadata.service. Oct 2 19:55:28.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:29.366215 systemd[1]: Stopped kubelet.service. Oct 2 19:55:29.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:29.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:29.387850 systemd[1]: Reloading. Oct 2 19:55:29.514202 /usr/lib/systemd/system-generators/torcx-generator[1323]: time="2023-10-02T19:55:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:55:29.514721 /usr/lib/systemd/system-generators/torcx-generator[1323]: time="2023-10-02T19:55:29Z" level=info msg="torcx already run" Oct 2 19:55:29.593740 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:55:29.593762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:55:29.616095 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.690000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit: BPF prog-id=49 op=LOAD Oct 2 19:55:29.691000 audit: BPF 
prog-id=34 op=UNLOAD Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit: BPF prog-id=50 op=LOAD Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.691000 audit: BPF prog-id=51 op=LOAD Oct 2 19:55:29.691000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:55:29.691000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit: BPF prog-id=52 op=LOAD Oct 2 19:55:29.692000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.692000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit: BPF prog-id=53 op=LOAD Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.693000 audit: BPF prog-id=54 op=LOAD Oct 2 19:55:29.693000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:55:29.693000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.694000 audit: BPF prog-id=55 op=LOAD Oct 2 19:55:29.694000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit: BPF prog-id=56 op=LOAD Oct 2 19:55:29.695000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.695000 audit: BPF prog-id=57 op=LOAD Oct 2 19:55:29.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit: BPF prog-id=58 op=LOAD Oct 2 19:55:29.696000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:55:29.696000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.696000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.697000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.697000 audit: BPF prog-id=59 op=LOAD Oct 2 19:55:29.697000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.698000 audit: BPF prog-id=60 op=LOAD Oct 2 19:55:29.698000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit: BPF prog-id=61 op=LOAD Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit: BPF prog-id=62 op=LOAD Oct 2 19:55:29.699000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:55:29.699000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.699000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:55:29.699000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.700000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:29.700000 audit: BPF prog-id=63 op=LOAD Oct 2 19:55:29.700000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:55:29.728078 systemd[1]: Started kubelet.service. Oct 2 19:55:29.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:29.800818 kubelet[1369]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:55:29.800818 kubelet[1369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:55:29.800818 kubelet[1369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:55:29.801434 kubelet[1369]: I1002 19:55:29.800829 1369 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:55:29.802369 kubelet[1369]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:55:29.802369 kubelet[1369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:55:29.802369 kubelet[1369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:55:30.539745 kubelet[1369]: I1002 19:55:30.539700 1369 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:55:30.540008 kubelet[1369]: I1002 19:55:30.539982 1369 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:55:30.540803 kubelet[1369]: I1002 19:55:30.540770 1369 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:55:30.548375 kubelet[1369]: I1002 19:55:30.548329 1369 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:55:30.553874 kubelet[1369]: I1002 19:55:30.553839 1369 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:55:30.554456 kubelet[1369]: I1002 19:55:30.554427 1369 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:55:30.554732 kubelet[1369]: I1002 19:55:30.554706 1369 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:55:30.555033 kubelet[1369]: I1002 19:55:30.555004 1369 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:55:30.555244 kubelet[1369]: I1002 19:55:30.555221 1369 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:55:30.555553 kubelet[1369]: I1002 19:55:30.555524 1369 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:55:30.563205 kubelet[1369]: I1002 19:55:30.563163 1369 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:55:30.563205 kubelet[1369]: I1002 19:55:30.563187 1369 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:55:30.563205 kubelet[1369]: I1002 19:55:30.563204 1369 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:55:30.563205 kubelet[1369]: I1002 19:55:30.563216 1369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:55:30.563803 kubelet[1369]: E1002 19:55:30.563764 1369 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:30.563803 kubelet[1369]: E1002 19:55:30.563807 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:30.564428 kubelet[1369]: I1002 19:55:30.564394 1369 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:55:30.564671 kubelet[1369]: W1002 19:55:30.564635 1369 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:55:30.565114 kubelet[1369]: I1002 19:55:30.565020 1369 server.go:1175] "Started kubelet" Oct 2 19:55:30.567762 kubelet[1369]: E1002 19:55:30.567371 1369 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:55:30.567762 kubelet[1369]: E1002 19:55:30.567397 1369 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:55:30.567958 kubelet[1369]: I1002 19:55:30.567840 1369 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:55:30.568534 kubelet[1369]: I1002 19:55:30.568473 1369 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:55:30.580549 kernel: kauditd_printk_skb: 362 callbacks suppressed Oct 2 19:55:30.580659 kernel: audit: type=1400 audit(1696276530.571:534): avc: denied { mac_admin } for pid=1369 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:30.571000 audit[1369]: AVC avc: denied { mac_admin } for pid=1369 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:30.580774 kubelet[1369]: I1002 19:55:30.572127 1369 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:55:30.580774 kubelet[1369]: I1002 19:55:30.572172 1369 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:55:30.580774 kubelet[1369]: I1002 19:55:30.572249 1369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:55:30.580774 kubelet[1369]: I1002 19:55:30.580116 1369 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:55:30.580774 kubelet[1369]: I1002 19:55:30.580190 1369 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:55:30.571000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:30.587280 kernel: audit: type=1401 audit(1696276530.571:534): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:30.571000 audit[1369]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00094ef30 a1=c000ae8018 a2=c00094ef00 a3=25 items=0 ppid=1 pid=1369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.588164 kernel: audit: type=1300 audit(1696276530.571:534): arch=c000003e syscall=188 success=no exit=-22 a0=c00094ef30 a1=c000ae8018 a2=c00094ef00 a3=25 items=0 ppid=1 pid=1369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.589581 kubelet[1369]: E1002 19:55:30.589567 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 
19:55:30.571000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:55:30.612550 kubelet[1369]: E1002 19:55:30.612522 1369 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.24.4.32" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:30.612846 kubelet[1369]: E1002 19:55:30.612746 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a62846d28b003", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 565001219, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 565001219, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:30.613186 kubelet[1369]: W1002 19:55:30.613171 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.24.4.32" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:30.613268 kernel: audit: type=1327 audit(1696276530.571:534): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:55:30.613366 kubelet[1369]: E1002 19:55:30.613349 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.32" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:30.613482 kubelet[1369]: W1002 19:55:30.613468 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:30.613552 kubelet[1369]: E1002 19:55:30.613542 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:30.613652 kubelet[1369]: W1002 19:55:30.613639 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:30.613735 kubelet[1369]: E1002 19:55:30.613720 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:30.571000 audit[1369]: AVC avc: denied { mac_admin } for pid=1369 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:30.571000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:30.621413 kubelet[1369]: E1002 19:55:30.621310 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a62846d4d17d6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 567387094, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 
567387094, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:30.622524 kernel: audit: type=1400 audit(1696276530.571:535): avc: denied { mac_admin } for pid=1369 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:30.622601 kernel: audit: type=1401 audit(1696276530.571:535): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:30.571000 audit[1369]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00086c7a0 a1=c000ae8030 a2=c00094efc0 a3=25 items=0 ppid=1 pid=1369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.629211 kubelet[1369]: I1002 19:55:30.629194 1369 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:55:30.629344 kubelet[1369]: I1002 19:55:30.629333 1369 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:55:30.629432 kubelet[1369]: I1002 19:55:30.629423 1369 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:55:30.630115 kernel: audit: type=1300 audit(1696276530.571:535): arch=c000003e syscall=188 success=no exit=-22 a0=c00086c7a0 a1=c000ae8030 a2=c00094efc0 a3=25 items=0 ppid=1 pid=1369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.571000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:55:30.636659 kernel: audit: type=1327 audit(1696276530.571:535): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:55:30.641607 kubelet[1369]: E1002 19:55:30.641530 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efc1af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.32 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628379055, time.Local), LastTimestamp:time.Date(2023, 
time.October, 2, 19, 55, 30, 628379055, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:30.642946 kubelet[1369]: I1002 19:55:30.642931 1369 policy_none.go:49] "None policy: Start" Oct 2 19:55:30.644379 kubelet[1369]: E1002 19:55:30.644240 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efd9de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.32 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628385246, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628385246, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:30.644662 kubelet[1369]: I1002 19:55:30.644369 1369 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:55:30.644755 kubelet[1369]: I1002 19:55:30.644743 1369 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:55:30.646170 kubelet[1369]: E1002 19:55:30.646068 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efe6c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.32 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628388552, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628388552, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:30.651824 systemd[1]: Created slice kubepods.slice. Oct 2 19:55:30.655910 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:55:30.658642 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:55:30.663543 kubelet[1369]: I1002 19:55:30.663527 1369 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:55:30.663000 audit[1369]: AVC avc: denied { mac_admin } for pid=1369 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:30.663793 kubelet[1369]: I1002 19:55:30.663780 1369 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:55:30.664062 kubelet[1369]: I1002 19:55:30.664035 1369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:55:30.665650 kubelet[1369]: E1002 19:55:30.665623 1369 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.32\" not found" Oct 2 19:55:30.663000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:30.671514 kernel: audit: type=1400 audit(1696276530.663:536): avc: denied { mac_admin } for pid=1369 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:30.671564 kernel: audit: type=1401 audit(1696276530.663:536): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:55:30.663000 audit[1369]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00005b7d0 a1=c000cc1218 a2=c00005b770 a3=25 items=0 ppid=1 pid=1369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.663000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:55:30.677576 kubelet[1369]: E1002 19:55:30.677490 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628473bc41bc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 675335612, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 675335612, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:30.678000 audit[1386]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.678000 audit[1386]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdc263fea0 a2=0 a3=7ffdc263fe8c items=0 ppid=1369 pid=1386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:55:30.679000 audit[1390]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.679000 audit[1390]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc2cee5b70 a2=0 a3=7ffc2cee5b5c items=0 ppid=1369 pid=1390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:55:30.680942 kubelet[1369]: I1002 19:55:30.680927 1369 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.32" Oct 2 19:55:30.681918 kubelet[1369]: E1002 19:55:30.681906 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:30.683128 kubelet[1369]: E1002 19:55:30.683114 1369 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.32" Oct 2 19:55:30.684108 kubelet[1369]: E1002 19:55:30.684017 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efc1af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.32 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628379055, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 680896862, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efc1af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
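Note: the NETFILTER_CFG audit records above and below each carry a PROCTITLE field, which is the hex-encoded, NUL-separated argv of the iptables/ip6tables invocation. They can be turned back into readable commands with a few lines of Python; the sample value is copied from the first iptables record above:

    def decode_proctitle(hex_argv: str) -> str:
        """Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated)."""
        raw = bytes.fromhex(hex_argv)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
    print(decode_proctitle(sample))
    # -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle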
Oct 2 19:55:30.685588 kubelet[1369]: E1002 19:55:30.685536 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efd9de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.32 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628385246, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 680901961, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efd9de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:30.687284 kubelet[1369]: E1002 19:55:30.687239 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efe6c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.32 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628388552, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 680904576, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efe6c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
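Note: nearly every kubelet error in this stretch is the same RBAC denial. The node has not yet registered, so its requests are still attributed to system:anonymous, and the API server refuses event writes, node registration, and (a little later) lease renewals. A quick way to see which verbs and resources are being refused is to tally the denial messages themselves; the regex below matches the exact phrasing used in these lines, and the log filename is a placeholder:

    import re
    from collections import Counter
    from pathlib import Path

    # Matches the apiserver denial text as it appears in this log, e.g.
    #   User "system:anonymous" cannot create resource "events" in API group ""
    DENIAL = re.compile(
        r'User "system:anonymous" cannot (\w+) resource "([^"]+)" in API group "([^"]*)"'
    )

    counts = Counter()
    for line in Path("kubelet.log").read_text().splitlines():  # placeholder path
        line = line.replace('\\"', '"')  # some denials are quoted inside err="..." fields
        for verb, resource, group in DENIAL.findall(line):
            counts[(verb, resource, group or "core")] += 1

    for key, n in counts.most_common():
        print(n, key)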
Oct 2 19:55:30.684000 audit[1392]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.684000 audit[1392]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc38d42260 a2=0 a3=7ffc38d4224c items=0 ppid=1369 pid=1392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:55:30.699000 audit[1397]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.699000 audit[1397]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdf9355480 a2=0 a3=7ffdf935546c items=0 ppid=1369 pid=1397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:55:30.750000 audit[1402]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.750000 audit[1402]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffd0622d30 a2=0 a3=7fffd0622d1c items=0 ppid=1369 pid=1402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:55:30.751000 audit[1403]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.751000 audit[1403]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd1f99f730 a2=0 a3=7ffd1f99f71c items=0 ppid=1369 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:55:30.758000 audit[1406]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.758000 audit[1406]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffda553a530 a2=0 a3=7ffda553a51c items=0 ppid=1369 pid=1406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.758000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:55:30.762000 audit[1409]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.762000 audit[1409]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe73d60490 a2=0 a3=7ffe73d6047c items=0 ppid=1369 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:55:30.763000 audit[1410]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.763000 audit[1410]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9aee8220 a2=0 a3=7fff9aee820c items=0 ppid=1369 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:55:30.764000 audit[1411]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.764000 audit[1411]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb8abb3f0 a2=0 a3=7fffb8abb3dc items=0 ppid=1369 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:55:30.766000 audit[1413]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.766000 audit[1413]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffa42595c0 a2=0 a3=7fffa42595ac items=0 ppid=1369 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:55:30.782587 kubelet[1369]: E1002 19:55:30.782562 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:30.769000 audit[1415]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.769000 audit[1415]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe6fb582a0 a2=0 a3=7ffe6fb5828c items=0 ppid=1369 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:55:30.798000 audit[1418]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.798000 audit[1418]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd01183c50 a2=0 a3=7ffd01183c3c items=0 ppid=1369 pid=1418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:55:30.803000 audit[1420]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.803000 audit[1420]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffffed6cd70 a2=0 a3=7ffffed6cd5c items=0 ppid=1369 pid=1420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:55:30.814000 audit[1423]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.814339 kubelet[1369]: E1002 19:55:30.814293 1369 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.24.4.32" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:30.814000 audit[1423]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe85639330 a2=0 a3=7ffe8563931c items=0 ppid=1369 pid=1423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:55:30.815589 kubelet[1369]: I1002 19:55:30.815557 1369 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:55:30.816000 audit[1424]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.816000 audit[1424]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe842317d0 a2=0 a3=7ffe842317bc items=0 ppid=1369 pid=1424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:55:30.816000 audit[1425]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.816000 audit[1425]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7282bd70 a2=0 a3=7ffe7282bd5c items=0 ppid=1369 pid=1425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:55:30.817000 audit[1426]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.817000 audit[1426]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffddcde9d20 a2=0 a3=7ffddcde9d0c items=0 ppid=1369 pid=1426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:55:30.818000 audit[1427]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.818000 audit[1427]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde2be4820 a2=0 a3=7ffde2be480c items=0 ppid=1369 pid=1427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:55:30.819000 audit[1429]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:55:30.819000 audit[1429]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb6b6d250 a2=0 a3=7fffb6b6d23c items=0 ppid=1369 pid=1429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:55:30.820000 audit[1430]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1430 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:55:30.820000 audit[1430]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffc4efd4e0 a2=0 a3=7fffc4efd4cc items=0 ppid=1369 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:55:30.821000 audit[1431]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.821000 audit[1431]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc35006b20 a2=0 a3=7ffc35006b0c items=0 ppid=1369 pid=1431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:55:30.823000 audit[1433]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.823000 audit[1433]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe5946d7c0 a2=0 a3=7ffe5946d7ac items=0 ppid=1369 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.823000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:55:30.824000 audit[1434]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.824000 audit[1434]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeaa3713b0 a2=0 a3=7ffeaa37139c items=0 ppid=1369 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:55:30.825000 audit[1435]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.825000 audit[1435]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd94fb850 a2=0 a3=7fffd94fb83c items=0 ppid=1369 pid=1435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:55:30.827000 audit[1437]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1437 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.827000 audit[1437]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff1436f270 a2=0 a3=7fff1436f25c items=0 ppid=1369 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:55:30.829000 audit[1439]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.829000 audit[1439]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe9a5aeca0 a2=0 a3=7ffe9a5aec8c items=0 ppid=1369 pid=1439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.829000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:55:30.832000 audit[1441]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.832000 audit[1441]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fffb2b13d20 a2=0 a3=7fffb2b13d0c items=0 ppid=1369 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.832000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:55:30.836000 audit[1443]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.836000 audit[1443]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd8be2d270 a2=0 a3=7ffd8be2d25c items=0 ppid=1369 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:55:30.840000 audit[1445]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.840000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffe3e3fb4e0 a2=0 a3=7ffe3e3fb4cc items=0 ppid=1369 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.840000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:55:30.841069 kubelet[1369]: I1002 19:55:30.841036 1369 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:55:30.841171 kubelet[1369]: I1002 19:55:30.841160 1369 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:55:30.841257 kubelet[1369]: I1002 19:55:30.841236 1369 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:55:30.841369 kubelet[1369]: E1002 19:55:30.841358 1369 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:55:30.842000 audit[1446]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.842000 audit[1446]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5a450d00 a2=0 a3=7fff5a450cec items=0 ppid=1369 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:55:30.843226 kubelet[1369]: W1002 19:55:30.843196 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:30.843279 kubelet[1369]: E1002 19:55:30.843229 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:30.843000 audit[1447]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.843000 audit[1447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9b8fd320 a2=0 a3=7fff9b8fd30c items=0 ppid=1369 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:55:30.844000 audit[1448]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:55:30.844000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3a4c9110 a2=0 a3=7ffc3a4c90fc items=0 ppid=1369 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:30.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:55:30.883791 
kubelet[1369]: E1002 19:55:30.883761 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:30.884599 kubelet[1369]: I1002 19:55:30.884576 1369 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.32" Oct 2 19:55:30.887504 kubelet[1369]: E1002 19:55:30.887460 1369 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.32" Oct 2 19:55:30.887648 kubelet[1369]: E1002 19:55:30.887510 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efc1af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.32 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628379055, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 884534083, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efc1af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:30.889872 kubelet[1369]: E1002 19:55:30.889802 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efd9de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.32 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628385246, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 884542018, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efd9de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:30.969564 kubelet[1369]: E1002 19:55:30.969413 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efe6c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.32 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628388552, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 884547078, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efe6c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:30.984570 kubelet[1369]: E1002 19:55:30.984534 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.086822 kubelet[1369]: E1002 19:55:31.085355 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.187139 kubelet[1369]: E1002 19:55:31.187114 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.215930 kubelet[1369]: E1002 19:55:31.215876 1369 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.24.4.32" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:31.287479 kubelet[1369]: E1002 19:55:31.287446 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.288686 kubelet[1369]: I1002 19:55:31.288649 1369 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.32" Oct 2 19:55:31.290655 kubelet[1369]: E1002 19:55:31.290570 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efc1af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.32 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628379055, time.Local), LastTimestamp:time.Date(2023, 
time.October, 2, 19, 55, 31, 288557439, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efc1af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:31.291255 kubelet[1369]: E1002 19:55:31.291239 1369 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.32" Oct 2 19:55:31.369350 kubelet[1369]: E1002 19:55:31.368975 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efd9de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.32 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628385246, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 31, 288594859, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efd9de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:31.388618 kubelet[1369]: E1002 19:55:31.388550 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.489681 kubelet[1369]: E1002 19:55:31.489637 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.524239 kubelet[1369]: W1002 19:55:31.524203 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:31.524514 kubelet[1369]: E1002 19:55:31.524461 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:31.564507 kubelet[1369]: E1002 19:55:31.564476 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:31.568466 kubelet[1369]: E1002 19:55:31.568343 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efe6c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.32 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628388552, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 31, 288601592, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efe6c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
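Note: the recurring "Unable to read config path" error (here and roughly once a second for the rest of the log) only means that the directory the kubelet is configured to watch for static pod manifests does not exist. If static pods are actually wanted on this node, creating the directory is enough to quiet the message; whether they are wanted here is an assumption:

    import os

    # Path taken from the log line above; only create it if static pods are intended.
    os.makedirs("/etc/kubernetes/manifests", exist_ok=True)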
Oct 2 19:55:31.590579 kubelet[1369]: E1002 19:55:31.590524 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.691623 kubelet[1369]: E1002 19:55:31.691501 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.792880 kubelet[1369]: E1002 19:55:31.792828 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.893422 kubelet[1369]: E1002 19:55:31.893356 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:31.994856 kubelet[1369]: E1002 19:55:31.994724 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.019243 kubelet[1369]: E1002 19:55:32.019209 1369 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.24.4.32" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:32.093537 kubelet[1369]: I1002 19:55:32.093505 1369 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.32" Oct 2 19:55:32.095487 kubelet[1369]: E1002 19:55:32.095436 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.095922 kubelet[1369]: E1002 19:55:32.095878 1369 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.32" Oct 2 19:55:32.096294 kubelet[1369]: E1002 19:55:32.096173 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efc1af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.32 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628379055, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 32, 93436449, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efc1af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:32.099605 kubelet[1369]: E1002 19:55:32.099430 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efd9de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.32 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628385246, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 32, 93456948, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efd9de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:32.129349 kubelet[1369]: W1002 19:55:32.129311 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:32.129349 kubelet[1369]: E1002 19:55:32.129362 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:32.140335 kubelet[1369]: W1002 19:55:32.140283 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:32.140335 kubelet[1369]: E1002 19:55:32.140332 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:32.163440 kubelet[1369]: W1002 19:55:32.163402 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.24.4.32" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:32.163440 kubelet[1369]: E1002 19:55:32.163442 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.32" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:32.168285 kubelet[1369]: E1002 19:55:32.168167 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efe6c8", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.32 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628388552, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 32, 93462939, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efe6c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:32.196103 kubelet[1369]: E1002 19:55:32.196035 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.297030 kubelet[1369]: E1002 19:55:32.296831 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.398081 kubelet[1369]: E1002 19:55:32.397929 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.498962 kubelet[1369]: E1002 19:55:32.498835 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.565407 kubelet[1369]: E1002 19:55:32.565357 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:32.599468 kubelet[1369]: E1002 19:55:32.599423 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.700416 kubelet[1369]: E1002 19:55:32.700383 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.801658 kubelet[1369]: E1002 19:55:32.801604 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:32.902543 kubelet[1369]: E1002 19:55:32.902317 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.003462 kubelet[1369]: E1002 19:55:33.003316 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.103640 kubelet[1369]: E1002 19:55:33.103494 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.204716 kubelet[1369]: E1002 19:55:33.204490 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.305747 kubelet[1369]: E1002 19:55:33.305601 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.406813 kubelet[1369]: E1002 19:55:33.406671 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.507941 kubelet[1369]: E1002 19:55:33.507782 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.566531 kubelet[1369]: E1002 19:55:33.566479 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:55:33.608714 kubelet[1369]: E1002 19:55:33.608584 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.623180 kubelet[1369]: E1002 19:55:33.623124 1369 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.24.4.32" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:33.682411 kubelet[1369]: W1002 19:55:33.682362 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:33.682723 kubelet[1369]: E1002 19:55:33.682688 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:33.697806 kubelet[1369]: I1002 19:55:33.697735 1369 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.32" Oct 2 19:55:33.699474 kubelet[1369]: E1002 19:55:33.699434 1369 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.32" Oct 2 19:55:33.700189 kubelet[1369]: E1002 19:55:33.699977 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efc1af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.32 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628379055, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 33, 697670490, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efc1af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:33.702127 kubelet[1369]: E1002 19:55:33.701946 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efd9de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.32 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628385246, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 33, 697679637, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efd9de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:33.704765 kubelet[1369]: E1002 19:55:33.704645 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efe6c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.32 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628388552, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 33, 697690077, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efe6c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:33.708763 kubelet[1369]: E1002 19:55:33.708705 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.809858 kubelet[1369]: E1002 19:55:33.809702 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:33.863596 kubelet[1369]: W1002 19:55:33.863503 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.24.4.32" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:33.863596 kubelet[1369]: E1002 19:55:33.863575 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.32" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:33.910940 kubelet[1369]: E1002 19:55:33.910683 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.011895 kubelet[1369]: E1002 19:55:34.011624 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.112100 kubelet[1369]: E1002 19:55:34.111902 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.154942 kubelet[1369]: W1002 19:55:34.154836 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:34.154942 kubelet[1369]: E1002 19:55:34.154900 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:34.212968 kubelet[1369]: E1002 19:55:34.212736 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.314023 kubelet[1369]: E1002 19:55:34.313909 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.415003 kubelet[1369]: E1002 19:55:34.414662 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.515551 kubelet[1369]: E1002 19:55:34.515427 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.566902 kubelet[1369]: E1002 19:55:34.566859 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:34.615845 kubelet[1369]: E1002 19:55:34.615772 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.716507 kubelet[1369]: E1002 19:55:34.715900 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.794879 kubelet[1369]: W1002 19:55:34.794828 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:34.795216 kubelet[1369]: E1002 19:55:34.795185 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 
19:55:34.816906 kubelet[1369]: E1002 19:55:34.816836 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:34.917867 kubelet[1369]: E1002 19:55:34.917838 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.019231 kubelet[1369]: E1002 19:55:35.019032 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.119566 kubelet[1369]: E1002 19:55:35.119504 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.220722 kubelet[1369]: E1002 19:55:35.220652 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.321722 kubelet[1369]: E1002 19:55:35.321676 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.423080 kubelet[1369]: E1002 19:55:35.422982 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.524124 kubelet[1369]: E1002 19:55:35.524075 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.567732 kubelet[1369]: E1002 19:55:35.567673 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:35.625107 kubelet[1369]: E1002 19:55:35.624921 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.665364 kubelet[1369]: E1002 19:55:35.665322 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:35.725754 kubelet[1369]: E1002 19:55:35.725694 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.826709 kubelet[1369]: E1002 19:55:35.826640 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:35.926962 kubelet[1369]: E1002 19:55:35.926764 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.027685 kubelet[1369]: E1002 19:55:36.027566 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.128774 kubelet[1369]: E1002 19:55:36.128647 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.229768 kubelet[1369]: E1002 19:55:36.229550 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.330600 kubelet[1369]: E1002 19:55:36.330465 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.431556 kubelet[1369]: E1002 19:55:36.431409 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.532550 kubelet[1369]: E1002 19:55:36.532326 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.567982 kubelet[1369]: E1002 19:55:36.567859 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:36.633523 kubelet[1369]: E1002 19:55:36.633424 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.734809 kubelet[1369]: E1002 19:55:36.734725 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.825305 kubelet[1369]: E1002 19:55:36.825242 1369 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io 
"172.24.4.32" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:55:36.835385 kubelet[1369]: E1002 19:55:36.835347 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:36.901433 kubelet[1369]: I1002 19:55:36.901369 1369 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.32" Oct 2 19:55:36.904037 kubelet[1369]: E1002 19:55:36.903975 1369 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.32" Oct 2 19:55:36.904260 kubelet[1369]: E1002 19:55:36.903978 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efc1af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.32 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628379055, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 36, 901012768, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efc1af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:55:36.905876 kubelet[1369]: E1002 19:55:36.905748 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efd9de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.32 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628385246, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 36, 901025922, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efd9de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:55:36.907397 kubelet[1369]: E1002 19:55:36.907275 1369 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.32.178a628470efe6c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.32", UID:"172.24.4.32", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.32 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.32"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 55, 30, 628388552, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 55, 36, 901032024, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.32.178a628470efe6c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
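The failures above all share one cause: every reflector list/watch, the node registration attempt, and the status events are rejected because the kubelet is still reaching the API server as `system:anonymous`, i.e. its TLS bootstrap has not completed yet (it does a few seconds later, at the "Certificate rotation detected" message). A minimal sketch, assuming a saved copy of this journal is piped on stdin, for tallying which verb/resource pairs are being refused; the regular expression just matches the phrasing visible in these lines (including klog's escaped quotes) and is not any stable kubelet format:

```go
// Tally `system:anonymous cannot VERB resource RES` denials in a saved log.
// Illustrative only, e.g.: journalctl -u kubelet > kubelet.log && go run denials.go < kubelet.log
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Tolerates both plain and \"-escaped quotes as they appear in these lines.
	re := regexp.MustCompile(`User \\?"system:anonymous\\?" cannot (\w+) resource \\?"([^"\\]+)\\?"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // these journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%5d  %s\n", n, k)
	}
}
```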
Oct 2 19:55:36.935784 kubelet[1369]: E1002 19:55:36.935723 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.036891 kubelet[1369]: E1002 19:55:37.036846 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.137922 kubelet[1369]: E1002 19:55:37.137791 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.237940 kubelet[1369]: E1002 19:55:37.237865 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.339244 kubelet[1369]: E1002 19:55:37.339155 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.440342 kubelet[1369]: E1002 19:55:37.440217 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.541296 kubelet[1369]: E1002 19:55:37.541202 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.568078 kubelet[1369]: E1002 19:55:37.567979 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:37.641607 kubelet[1369]: E1002 19:55:37.641464 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.741985 kubelet[1369]: E1002 19:55:37.741702 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.823515 kubelet[1369]: W1002 19:55:37.823411 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.24.4.32" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:37.823515 kubelet[1369]: E1002 19:55:37.823521 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.32" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:55:37.841941 kubelet[1369]: E1002 19:55:37.841861 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:37.942904 kubelet[1369]: E1002 19:55:37.942749 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.043898 kubelet[1369]: E1002 19:55:38.043672 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.144950 kubelet[1369]: E1002 19:55:38.144809 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.178144 kubelet[1369]: W1002 19:55:38.178039 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:38.178144 kubelet[1369]: E1002 19:55:38.178143 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:55:38.245996 kubelet[1369]: E1002 19:55:38.245912 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.249438 kubelet[1369]: W1002 19:55:38.249399 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:38.249656 kubelet[1369]: E1002 19:55:38.249632 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:55:38.346172 kubelet[1369]: E1002 19:55:38.346112 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.446967 kubelet[1369]: E1002 19:55:38.446901 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.547850 kubelet[1369]: E1002 19:55:38.547783 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.568199 kubelet[1369]: E1002 19:55:38.568156 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:38.649083 kubelet[1369]: E1002 19:55:38.648925 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.749979 kubelet[1369]: E1002 19:55:38.749835 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.850758 kubelet[1369]: E1002 19:55:38.850694 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:38.950993 kubelet[1369]: E1002 19:55:38.950809 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.051014 kubelet[1369]: E1002 19:55:39.050965 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.151963 kubelet[1369]: E1002 19:55:39.151898 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.252993 kubelet[1369]: E1002 19:55:39.252778 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.353878 kubelet[1369]: E1002 19:55:39.353745 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.454840 kubelet[1369]: E1002 19:55:39.454708 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.467327 kubelet[1369]: W1002 19:55:39.467258 1369 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:39.467494 kubelet[1369]: E1002 19:55:39.467352 1369 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:55:39.555559 kubelet[1369]: E1002 19:55:39.555442 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.568909 kubelet[1369]: E1002 19:55:39.568807 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:39.656191 kubelet[1369]: E1002 19:55:39.656130 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.756681 kubelet[1369]: E1002 19:55:39.756625 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.857645 kubelet[1369]: E1002 
19:55:39.857600 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:39.958381 kubelet[1369]: E1002 19:55:39.958222 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.059556 kubelet[1369]: E1002 19:55:40.059413 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.160499 kubelet[1369]: E1002 19:55:40.160296 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.260564 kubelet[1369]: E1002 19:55:40.260422 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.361560 kubelet[1369]: E1002 19:55:40.361299 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.462406 kubelet[1369]: E1002 19:55:40.462196 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.548534 kubelet[1369]: I1002 19:55:40.548292 1369 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:55:40.562916 kubelet[1369]: E1002 19:55:40.562805 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.569308 kubelet[1369]: E1002 19:55:40.569113 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:40.663204 kubelet[1369]: E1002 19:55:40.663157 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.666525 kubelet[1369]: E1002 19:55:40.666493 1369 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.32\" not found" Oct 2 19:55:40.667455 kubelet[1369]: E1002 19:55:40.667421 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:40.764264 kubelet[1369]: E1002 19:55:40.764102 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:40.865403 kubelet[1369]: E1002 19:55:40.865366 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:41.415128 systemd-resolved[999]: Clock change detected. Flushing caches. Oct 2 19:55:41.416285 systemd-timesyncd[1000]: Contacted time server 212.83.158.83:123 (2.flatcar.pool.ntp.org). Oct 2 19:55:41.416880 systemd-timesyncd[1000]: Initial clock synchronization to Mon 2023-10-02 19:55:41.415010 UTC. 
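The transport message above ("Certificate rotation detected, shutting down client connections to start using new credentials") marks the point where the kubelet has obtained a signed client certificate; the anonymous-user denials stop shortly afterwards and the node registers successfully a few lines below. The systemd-timesyncd clock step logged right before it matters here too, since certificate validity checks are sensitive to large clock corrections on first boot. To inspect the credential the kubelet ended up with, one can read the rotated PEM directly; a minimal sketch using only the Go standard library, where the default path is the conventional kubelet location and an assumption, not something taken from this log:

```go
// Print subject and validity of the certificates in a kubelet client PEM.
// Illustrative sketch; pass a different path as the first argument if needed.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	path := "/var/lib/kubelet/pki/kubelet-client-current.pem" // assumed default location
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s  notBefore=%s  notAfter=%s\n",
			cert.Subject, cert.NotBefore.UTC(), cert.NotAfter.UTC())
	}
}
```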
Oct 2 19:55:41.513827 kubelet[1369]: E1002 19:55:41.513768 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:41.548144 kubelet[1369]: E1002 19:55:41.548104 1369 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.32" not found Oct 2 19:55:41.614585 kubelet[1369]: E1002 19:55:41.614285 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:41.715457 kubelet[1369]: E1002 19:55:41.715227 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:41.816330 kubelet[1369]: E1002 19:55:41.816268 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:41.917179 kubelet[1369]: E1002 19:55:41.917021 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.017327 kubelet[1369]: E1002 19:55:42.017183 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.117505 kubelet[1369]: E1002 19:55:42.117262 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.117505 kubelet[1369]: E1002 19:55:42.117353 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:42.218464 kubelet[1369]: E1002 19:55:42.218324 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.319350 kubelet[1369]: E1002 19:55:42.319307 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.420320 kubelet[1369]: E1002 19:55:42.420249 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.521177 kubelet[1369]: E1002 19:55:42.520968 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.566775 kubelet[1369]: E1002 19:55:42.566702 1369 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.32" not found Oct 2 19:55:42.621823 kubelet[1369]: E1002 19:55:42.621764 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.722216 kubelet[1369]: E1002 19:55:42.722149 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.823535 kubelet[1369]: E1002 19:55:42.823333 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:42.924271 kubelet[1369]: E1002 19:55:42.924134 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.024307 kubelet[1369]: E1002 19:55:43.024254 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.117772 kubelet[1369]: E1002 19:55:43.117728 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:43.125124 kubelet[1369]: E1002 19:55:43.125057 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.225298 kubelet[1369]: E1002 19:55:43.225164 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.326103 kubelet[1369]: E1002 19:55:43.325925 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.426345 kubelet[1369]: E1002 19:55:43.426046 1369 kubelet.go:2448] "Error getting node" 
err="node \"172.24.4.32\" not found" Oct 2 19:55:43.527192 kubelet[1369]: E1002 19:55:43.527145 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.627979 kubelet[1369]: E1002 19:55:43.627919 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.728695 kubelet[1369]: E1002 19:55:43.728508 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.778963 kubelet[1369]: E1002 19:55:43.778879 1369 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.32\" not found" node="172.24.4.32" Oct 2 19:55:43.829551 kubelet[1369]: E1002 19:55:43.829497 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.853143 kubelet[1369]: I1002 19:55:43.853081 1369 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.32" Oct 2 19:55:43.930577 kubelet[1369]: E1002 19:55:43.930513 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:43.968289 kubelet[1369]: I1002 19:55:43.968228 1369 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.32" Oct 2 19:55:44.031297 kubelet[1369]: E1002 19:55:44.031067 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.118499 kubelet[1369]: E1002 19:55:44.118445 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:44.132172 kubelet[1369]: E1002 19:55:44.132129 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.232487 kubelet[1369]: E1002 19:55:44.232447 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.315000 audit[1183]: USER_END pid=1183 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:44.316270 sudo[1183]: pam_unix(sudo:session): session closed for user root Oct 2 19:55:44.318947 kernel: kauditd_printk_skb: 101 callbacks suppressed Oct 2 19:55:44.319069 kernel: audit: type=1106 audit(1696276544.315:570): pid=1183 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:44.316000 audit[1183]: CRED_DISP pid=1183 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:55:44.333341 kubelet[1369]: E1002 19:55:44.333284 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.337447 kernel: audit: type=1104 audit(1696276544.316:571): pid=1183 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:44.434569 kubelet[1369]: E1002 19:55:44.434524 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.466935 sshd[1179]: pam_unix(sshd:session): session closed for user core Oct 2 19:55:44.468000 audit[1179]: USER_END pid=1179 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:44.482473 kernel: audit: type=1106 audit(1696276544.468:572): pid=1179 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:44.482587 systemd[1]: sshd@6-172.24.4.32:22-172.24.4.1:52130.service: Deactivated successfully. Oct 2 19:55:44.468000 audit[1179]: CRED_DISP pid=1179 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:44.484158 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:55:44.486662 systemd-logind[1037]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:55:44.488780 systemd-logind[1037]: Removed session 7. Oct 2 19:55:44.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.32:22-172.24.4.1:52130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:44.502638 kernel: audit: type=1104 audit(1696276544.468:573): pid=1179 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 19:55:44.502730 kernel: audit: type=1131 audit(1696276544.482:574): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.32:22-172.24.4.1:52130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:44.535718 kubelet[1369]: E1002 19:55:44.535671 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.637358 kubelet[1369]: E1002 19:55:44.637316 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.738796 kubelet[1369]: E1002 19:55:44.738743 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.838967 kubelet[1369]: E1002 19:55:44.838917 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:44.939556 kubelet[1369]: E1002 19:55:44.939332 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.040645 kubelet[1369]: E1002 19:55:45.040527 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.118728 kubelet[1369]: E1002 19:55:45.118673 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:45.141603 kubelet[1369]: E1002 19:55:45.141485 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.242591 kubelet[1369]: E1002 19:55:45.242160 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.342490 kubelet[1369]: E1002 19:55:45.342340 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.443468 kubelet[1369]: E1002 19:55:45.443195 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.544655 kubelet[1369]: E1002 19:55:45.544242 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.644647 kubelet[1369]: E1002 19:55:45.644472 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.745450 kubelet[1369]: E1002 19:55:45.745363 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.846600 kubelet[1369]: E1002 19:55:45.846380 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:45.947493 kubelet[1369]: E1002 19:55:45.947390 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.048944 kubelet[1369]: E1002 19:55:46.048651 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.119356 kubelet[1369]: E1002 19:55:46.119280 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:46.149213 kubelet[1369]: E1002 19:55:46.149029 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.216488 kubelet[1369]: E1002 19:55:46.216374 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:46.249668 kubelet[1369]: E1002 19:55:46.249572 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.350833 kubelet[1369]: E1002 19:55:46.350793 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.452309 kubelet[1369]: E1002 19:55:46.451746 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.553529 kubelet[1369]: E1002 19:55:46.553483 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 
19:55:46.654664 kubelet[1369]: E1002 19:55:46.654562 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.755488 kubelet[1369]: E1002 19:55:46.754782 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.856798 kubelet[1369]: E1002 19:55:46.856724 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:46.957836 kubelet[1369]: E1002 19:55:46.957789 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.059351 kubelet[1369]: E1002 19:55:47.058918 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.119582 kubelet[1369]: E1002 19:55:47.119536 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:47.159578 kubelet[1369]: E1002 19:55:47.159513 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.259806 kubelet[1369]: E1002 19:55:47.259747 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.361247 kubelet[1369]: E1002 19:55:47.360661 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.461069 kubelet[1369]: E1002 19:55:47.460998 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.561998 kubelet[1369]: E1002 19:55:47.561941 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.663488 kubelet[1369]: E1002 19:55:47.662987 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.763844 kubelet[1369]: E1002 19:55:47.763794 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.864570 kubelet[1369]: E1002 19:55:47.864493 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:47.965329 kubelet[1369]: E1002 19:55:47.964898 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.065944 kubelet[1369]: E1002 19:55:48.065885 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.120681 kubelet[1369]: E1002 19:55:48.120537 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:48.166541 kubelet[1369]: E1002 19:55:48.166487 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.268056 kubelet[1369]: E1002 19:55:48.267577 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.369494 kubelet[1369]: E1002 19:55:48.369390 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.470141 kubelet[1369]: E1002 19:55:48.470082 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.571850 kubelet[1369]: E1002 19:55:48.571297 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.672956 kubelet[1369]: E1002 19:55:48.672850 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.773945 kubelet[1369]: E1002 19:55:48.773846 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:48.875054 kubelet[1369]: E1002 19:55:48.874967 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 
2 19:55:48.976064 kubelet[1369]: E1002 19:55:48.975929 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:49.077112 kubelet[1369]: E1002 19:55:49.077016 1369 kubelet.go:2448] "Error getting node" err="node \"172.24.4.32\" not found" Oct 2 19:55:49.121149 kubelet[1369]: I1002 19:55:49.121098 1369 apiserver.go:52] "Watching apiserver" Oct 2 19:55:49.121364 kubelet[1369]: E1002 19:55:49.121334 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:49.126472 kubelet[1369]: I1002 19:55:49.125732 1369 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:55:49.126785 kubelet[1369]: I1002 19:55:49.126723 1369 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:55:49.137441 kubelet[1369]: I1002 19:55:49.135545 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a54de888-4579-4517-9b55-763d5150bc99-kube-proxy\") pod \"kube-proxy-dzjzl\" (UID: \"a54de888-4579-4517-9b55-763d5150bc99\") " pod="kube-system/kube-proxy-dzjzl" Oct 2 19:55:49.137441 kubelet[1369]: I1002 19:55:49.135636 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cilium-run\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.137441 kubelet[1369]: I1002 19:55:49.135695 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-bpf-maps\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.137441 kubelet[1369]: I1002 19:55:49.135752 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-lib-modules\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.137441 kubelet[1369]: I1002 19:55:49.135811 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/749d446f-a980-4e1d-bfed-f215397bd061-cilium-config-path\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.137441 kubelet[1369]: I1002 19:55:49.135901 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-host-proc-sys-net\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.139833 kubelet[1369]: I1002 19:55:49.135960 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cilium-cgroup\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.139833 kubelet[1369]: I1002 19:55:49.136015 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-hostproc\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.139833 kubelet[1369]: I1002 19:55:49.136071 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-xtables-lock\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.139833 kubelet[1369]: I1002 19:55:49.136125 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/749d446f-a980-4e1d-bfed-f215397bd061-hubble-tls\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.139833 kubelet[1369]: I1002 19:55:49.136185 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a54de888-4579-4517-9b55-763d5150bc99-xtables-lock\") pod \"kube-proxy-dzjzl\" (UID: \"a54de888-4579-4517-9b55-763d5150bc99\") " pod="kube-system/kube-proxy-dzjzl" Oct 2 19:55:49.139833 kubelet[1369]: I1002 19:55:49.136255 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csxhj\" (UniqueName: \"kubernetes.io/projected/a54de888-4579-4517-9b55-763d5150bc99-kube-api-access-csxhj\") pod \"kube-proxy-dzjzl\" (UID: \"a54de888-4579-4517-9b55-763d5150bc99\") " pod="kube-system/kube-proxy-dzjzl" Oct 2 19:55:49.140201 kubelet[1369]: I1002 19:55:49.136310 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a54de888-4579-4517-9b55-763d5150bc99-lib-modules\") pod \"kube-proxy-dzjzl\" (UID: \"a54de888-4579-4517-9b55-763d5150bc99\") " pod="kube-system/kube-proxy-dzjzl" Oct 2 19:55:49.140201 kubelet[1369]: I1002 19:55:49.136364 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cni-path\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.140201 kubelet[1369]: I1002 19:55:49.136456 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-etc-cni-netd\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.140201 kubelet[1369]: I1002 19:55:49.136520 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/749d446f-a980-4e1d-bfed-f215397bd061-clustermesh-secrets\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.140201 kubelet[1369]: I1002 19:55:49.136616 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-host-proc-sys-kernel\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.140201 kubelet[1369]: I1002 
19:55:49.136680 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qmwp\" (UniqueName: \"kubernetes.io/projected/749d446f-a980-4e1d-bfed-f215397bd061-kube-api-access-6qmwp\") pod \"cilium-x8vt6\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " pod="kube-system/cilium-x8vt6" Oct 2 19:55:49.140775 kubelet[1369]: I1002 19:55:49.136697 1369 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:55:49.141220 systemd[1]: Created slice kubepods-besteffort-poda54de888_4579_4517_9b55_763d5150bc99.slice. Oct 2 19:55:49.158120 systemd[1]: Created slice kubepods-burstable-pod749d446f_a980_4e1d_bfed_f215397bd061.slice. Oct 2 19:55:49.177796 kubelet[1369]: I1002 19:55:49.177747 1369 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:55:49.178862 env[1043]: time="2023-10-02T19:55:49.178791300Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:55:49.179510 kubelet[1369]: I1002 19:55:49.179239 1369 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:55:49.179876 kubelet[1369]: E1002 19:55:49.179844 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:49.752874 env[1043]: time="2023-10-02T19:55:49.752709162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzjzl,Uid:a54de888-4579-4517-9b55-763d5150bc99,Namespace:kube-system,Attempt:0,}" Oct 2 19:55:49.770927 env[1043]: time="2023-10-02T19:55:49.770853021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8vt6,Uid:749d446f-a980-4e1d-bfed-f215397bd061,Namespace:kube-system,Attempt:0,}" Oct 2 19:55:50.122648 kubelet[1369]: E1002 19:55:50.122599 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:50.588589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774551002.mount: Deactivated successfully. 
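Two messages keep repeating through this stretch. The file_linux.go "Unable to read config path ... /etc/kubernetes/manifests" entry only means the static-pod directory (the kubelet's --pod-manifest-path / staticPodPath) does not exist, which is harmless on a node that runs no static pods. The "Container runtime network not ready ... cni plugin not initialized" entry will persist until the cilium pod scheduled above writes a CNI configuration. A minimal sketch that checks both conditions locally; the paths are the usual defaults and are assumptions, not values read from this log:

```go
// Quick local check of two recurring conditions from this log:
//   - does the static-pod directory exist?
//   - has any CNI network configuration been dropped yet?
// Paths are assumed defaults; override via command-line arguments.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	manifests := "/etc/kubernetes/manifests"
	cniDir := "/etc/cni/net.d"
	if len(os.Args) > 1 {
		manifests = os.Args[1]
	}
	if len(os.Args) > 2 {
		cniDir = os.Args[2]
	}

	if fi, err := os.Stat(manifests); err != nil {
		fmt.Printf("static-pod path %s: %v (kubelet keeps logging 'Unable to read config path')\n", manifests, err)
	} else if fi.IsDir() {
		fmt.Printf("static-pod path %s exists\n", manifests)
	}

	confs, _ := filepath.Glob(filepath.Join(cniDir, "*.conf*")) // .conf, .conflist, ...
	if len(confs) == 0 {
		fmt.Printf("no CNI config in %s yet: runtime network stays NotReady\n", cniDir)
	} else {
		fmt.Printf("CNI config present: %v\n", confs)
	}
}
```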
Oct 2 19:55:50.618519 env[1043]: time="2023-10-02T19:55:50.618085305Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:50.622754 env[1043]: time="2023-10-02T19:55:50.622595875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:50.629700 env[1043]: time="2023-10-02T19:55:50.629610020Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:50.638432 env[1043]: time="2023-10-02T19:55:50.638337268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:50.642470 env[1043]: time="2023-10-02T19:55:50.642367887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:50.650367 env[1043]: time="2023-10-02T19:55:50.650294824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:50.660184 env[1043]: time="2023-10-02T19:55:50.660042626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:50.662671 env[1043]: time="2023-10-02T19:55:50.662617445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:55:50.707761 env[1043]: time="2023-10-02T19:55:50.707669150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:55:50.707761 env[1043]: time="2023-10-02T19:55:50.707707181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:55:50.707761 env[1043]: time="2023-10-02T19:55:50.707720195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:55:50.708117 env[1043]: time="2023-10-02T19:55:50.707838918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574 pid=1472 runtime=io.containerd.runc.v2 Oct 2 19:55:50.718025 env[1043]: time="2023-10-02T19:55:50.717880671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:55:50.718747 env[1043]: time="2023-10-02T19:55:50.718668288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:55:50.719116 env[1043]: time="2023-10-02T19:55:50.718943955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:55:50.719707 env[1043]: time="2023-10-02T19:55:50.719638238Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7230638f949a4f6f35fc91ba3d46f4867b3293c824d02cccc783225df92a7e3b pid=1468 runtime=io.containerd.runc.v2 Oct 2 19:55:50.723320 systemd[1]: Started cri-containerd-c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574.scope. Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750541 kernel: audit: type=1400 audit(1696276550.743:575): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750682 kernel: audit: type=1400 audit(1696276550.743:576): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754871 kernel: audit: type=1400 audit(1696276550.743:577): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.759488 kernel: audit: type=1400 audit(1696276550.743:578): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.760191 systemd[1]: Started cri-containerd-7230638f949a4f6f35fc91ba3d46f4867b3293c824d02cccc783225df92a7e3b.scope. 
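The audit records around this point are noisy but appear benign: as each runc shim sets up its container's cgroup, it loads small BPF programs (the device controller on cgroup v2), and AVC denials for the bpf/perfmon capabilities are logged even though the SYSCALL records show success=yes and the "BPF prog-id=... op=LOAD" events go through. If the volume of these records is a concern, a saved excerpt can be reduced to the prog-id lifecycle; a minimal sketch, again assuming the excerpt is piped on stdin:

```go
// Summarize BPF prog-id LOAD/UNLOAD audit events from a saved excerpt.
// Illustrative only; reads text on stdin and reports, per prog-id, whether
// a matching UNLOAD was seen by the end of the excerpt.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
	"strconv"
)

func main() {
	re := regexp.MustCompile(`BPF prog-id=(\d+) op=(LOAD|UNLOAD)`)
	balance := map[int]int{} // +1 per LOAD, -1 per UNLOAD
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			id, _ := strconv.Atoi(m[1])
			if m[2] == "LOAD" {
				balance[id]++
			} else {
				balance[id]--
			}
		}
	}
	ids := make([]int, 0, len(balance))
	for id := range balance {
		ids = append(ids, id)
	}
	sort.Ints(ids)
	for _, id := range ids {
		state := "load/unload balanced"
		if balance[id] > 0 {
			state = "still loaded at end of excerpt"
		}
		fmt.Printf("prog-id=%d: %s\n", id, state)
	}
}
```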
Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.770316 kernel: audit: type=1400 audit(1696276550.743:579): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.770390 kernel: audit: type=1400 audit(1696276550.743:580): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.775439 kernel: audit: type=1400 audit(1696276550.743:581): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.783928 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:55:50.784023 kernel: audit: type=1400 audit(1696276550.743:582): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.784045 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit: BPF prog-id=64 op=LOAD Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1472 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333666338383766373531313363666535653564663138353432616363 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 
audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1472 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333666338383766373531313363666535653564663138353432616363 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.750000 audit: BPF prog-id=65 op=LOAD Oct 2 19:55:50.750000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0001851e0 items=0 ppid=1472 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333666338383766373531313363666535653564663138353432616363 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit: BPF prog-id=66 op=LOAD Oct 2 19:55:50.754000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000185228 items=0 ppid=1472 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333666338383766373531313363666535653564663138353432616363 Oct 2 19:55:50.754000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:55:50.754000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.754000 audit: BPF prog-id=67 op=LOAD Oct 2 19:55:50.754000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000185638 items=0 ppid=1472 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333666338383766373531313363666535653564663138353432616363 Oct 2 19:55:50.776000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.776000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.776000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.776000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.776000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.776000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.776000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.776000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.776000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.777000 audit: BPF prog-id=68 op=LOAD Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1468 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732333036333866393439613466366633356663393162613364343666 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.787000 audit: BPF prog-id=69 op=LOAD Oct 2 19:55:50.787000 audit[1501]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 
a1=c0001459d8 a2=78 a3=c000185dc0 items=0 ppid=1468 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732333036333866393439613466366633356663393162613364343666 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit: BPF prog-id=70 op=LOAD Oct 2 19:55:50.788000 audit[1501]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c000185e08 items=0 ppid=1468 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732333036333866393439613466366633356663393162613364343666 Oct 2 19:55:50.788000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:55:50.788000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { perfmon } for pid=1501 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit[1501]: AVC avc: denied { bpf } for pid=1501 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:55:50.788000 audit: BPF prog-id=71 op=LOAD Oct 2 19:55:50.788000 audit[1501]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c0001c6218 items=0 ppid=1468 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:55:50.788000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732333036333866393439613466366633356663393162613364343666 Oct 2 19:55:50.798031 env[1043]: time="2023-10-02T19:55:50.797979426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8vt6,Uid:749d446f-a980-4e1d-bfed-f215397bd061,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\"" Oct 2 19:55:50.800091 env[1043]: time="2023-10-02T19:55:50.800068484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:55:50.807431 env[1043]: time="2023-10-02T19:55:50.807377172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzjzl,Uid:a54de888-4579-4517-9b55-763d5150bc99,Namespace:kube-system,Attempt:0,} returns sandbox id \"7230638f949a4f6f35fc91ba3d46f4867b3293c824d02cccc783225df92a7e3b\"" Oct 2 19:55:51.111567 kubelet[1369]: E1002 19:55:51.111510 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:51.123840 
kubelet[1369]: E1002 19:55:51.123722 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:51.219646 kubelet[1369]: E1002 19:55:51.219549 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:52.124841 kubelet[1369]: E1002 19:55:52.124788 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:53.126687 kubelet[1369]: E1002 19:55:53.126610 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:54.127523 kubelet[1369]: E1002 19:55:54.127453 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:55.128323 kubelet[1369]: E1002 19:55:55.128246 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:56.129071 kubelet[1369]: E1002 19:55:56.129016 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:56.224911 kubelet[1369]: E1002 19:55:56.224845 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:55:57.129718 kubelet[1369]: E1002 19:55:57.129636 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:57.336577 update_engine[1038]: I1002 19:55:57.336437 1038 update_attempter.cc:505] Updating boot flags... Oct 2 19:55:58.130504 kubelet[1369]: E1002 19:55:58.130457 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:55:58.166739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759240449.mount: Deactivated successfully. 
Oct 2 19:55:59.130870 kubelet[1369]: E1002 19:55:59.130815 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:00.131322 kubelet[1369]: E1002 19:56:00.131250 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:01.131808 kubelet[1369]: E1002 19:56:01.131771 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:01.226001 kubelet[1369]: E1002 19:56:01.225956 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:02.132394 kubelet[1369]: E1002 19:56:02.132327 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:02.696274 env[1043]: time="2023-10-02T19:56:02.696207109Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:02.699511 env[1043]: time="2023-10-02T19:56:02.699466482Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:02.701726 env[1043]: time="2023-10-02T19:56:02.701661499Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:02.702077 env[1043]: time="2023-10-02T19:56:02.702033066Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 19:56:02.702998 env[1043]: time="2023-10-02T19:56:02.702947711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:56:02.707117 env[1043]: time="2023-10-02T19:56:02.707073850Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:56:02.722952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217724378.mount: Deactivated successfully. Oct 2 19:56:02.730516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770299549.mount: Deactivated successfully. Oct 2 19:56:02.739790 env[1043]: time="2023-10-02T19:56:02.739749684Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\"" Oct 2 19:56:02.741042 env[1043]: time="2023-10-02T19:56:02.741018563Z" level=info msg="StartContainer for \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\"" Oct 2 19:56:02.764727 systemd[1]: Started cri-containerd-2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1.scope. 
Oct 2 19:56:02.784983 systemd[1]: cri-containerd-2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1.scope: Deactivated successfully. Oct 2 19:56:03.132856 kubelet[1369]: E1002 19:56:03.132783 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:03.360806 env[1043]: time="2023-10-02T19:56:03.360645313Z" level=info msg="shim disconnected" id=2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1 Oct 2 19:56:03.361158 env[1043]: time="2023-10-02T19:56:03.361066543Z" level=warning msg="cleaning up after shim disconnected" id=2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1 namespace=k8s.io Oct 2 19:56:03.361158 env[1043]: time="2023-10-02T19:56:03.361143227Z" level=info msg="cleaning up dead shim" Oct 2 19:56:03.389189 env[1043]: time="2023-10-02T19:56:03.388946482Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1580 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:03.393012 env[1043]: time="2023-10-02T19:56:03.392763501Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:56:03.393544 env[1043]: time="2023-10-02T19:56:03.393371661Z" level=error msg="Failed to pipe stdout of container \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\"" error="reading from a closed fifo" Oct 2 19:56:03.393674 env[1043]: time="2023-10-02T19:56:03.393610028Z" level=error msg="Failed to pipe stderr of container \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\"" error="reading from a closed fifo" Oct 2 19:56:03.398297 env[1043]: time="2023-10-02T19:56:03.398164741Z" level=error msg="StartContainer for \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:03.398682 kubelet[1369]: E1002 19:56:03.398589 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1" Oct 2 19:56:03.398862 kubelet[1369]: E1002 19:56:03.398788 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:03.398862 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:03.398862 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:56:03.398862 kubelet[1369]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6qmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:03.399335 kubelet[1369]: E1002 19:56:03.398877 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:03.477839 env[1043]: time="2023-10-02T19:56:03.477713074Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:56:03.514832 env[1043]: time="2023-10-02T19:56:03.514659857Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\"" Oct 2 19:56:03.516090 env[1043]: time="2023-10-02T19:56:03.516033063Z" level=info msg="StartContainer for \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\"" Oct 2 19:56:03.558305 systemd[1]: Started cri-containerd-1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3.scope. Oct 2 19:56:03.579829 systemd[1]: cri-containerd-1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3.scope: Deactivated successfully. 
Oct 2 19:56:03.588815 env[1043]: time="2023-10-02T19:56:03.588752678Z" level=info msg="shim disconnected" id=1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3 Oct 2 19:56:03.588961 env[1043]: time="2023-10-02T19:56:03.588816849Z" level=warning msg="cleaning up after shim disconnected" id=1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3 namespace=k8s.io Oct 2 19:56:03.588961 env[1043]: time="2023-10-02T19:56:03.588830044Z" level=info msg="cleaning up dead shim" Oct 2 19:56:03.598424 env[1043]: time="2023-10-02T19:56:03.598336092Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1619 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:03.598704 env[1043]: time="2023-10-02T19:56:03.598635924Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Oct 2 19:56:03.598892 env[1043]: time="2023-10-02T19:56:03.598844325Z" level=error msg="Failed to pipe stdout of container \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\"" error="reading from a closed fifo" Oct 2 19:56:03.602489 env[1043]: time="2023-10-02T19:56:03.602443646Z" level=error msg="Failed to pipe stderr of container \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\"" error="reading from a closed fifo" Oct 2 19:56:03.606164 env[1043]: time="2023-10-02T19:56:03.606129479Z" level=error msg="StartContainer for \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:03.606550 kubelet[1369]: E1002 19:56:03.606380 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3" Oct 2 19:56:03.607053 kubelet[1369]: E1002 19:56:03.606709 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:03.607053 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:03.607053 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:56:03.607053 kubelet[1369]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6qmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:03.607252 kubelet[1369]: E1002 19:56:03.606753 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:03.722554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1-rootfs.mount: Deactivated successfully. 
Oct 2 19:56:04.133026 kubelet[1369]: E1002 19:56:04.132995 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:04.477613 kubelet[1369]: I1002 19:56:04.476065 1369 scope.go:115] "RemoveContainer" containerID="2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1" Oct 2 19:56:04.477613 kubelet[1369]: I1002 19:56:04.476817 1369 scope.go:115] "RemoveContainer" containerID="2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1" Oct 2 19:56:04.483835 env[1043]: time="2023-10-02T19:56:04.482941399Z" level=info msg="RemoveContainer for \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\"" Oct 2 19:56:04.490149 env[1043]: time="2023-10-02T19:56:04.490091449Z" level=info msg="RemoveContainer for \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\" returns successfully" Oct 2 19:56:04.490775 env[1043]: time="2023-10-02T19:56:04.490731910Z" level=info msg="RemoveContainer for \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\"" Oct 2 19:56:04.490775 env[1043]: time="2023-10-02T19:56:04.490766886Z" level=info msg="RemoveContainer for \"2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1\" returns successfully" Oct 2 19:56:04.491348 kubelet[1369]: E1002 19:56:04.491305 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:04.811596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802062905.mount: Deactivated successfully. 
Oct 2 19:56:05.133189 kubelet[1369]: E1002 19:56:05.133120 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:05.438958 env[1043]: time="2023-10-02T19:56:05.438587724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:05.441018 env[1043]: time="2023-10-02T19:56:05.440964962Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:05.443290 env[1043]: time="2023-10-02T19:56:05.443223698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:05.445677 env[1043]: time="2023-10-02T19:56:05.445641092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:05.446319 env[1043]: time="2023-10-02T19:56:05.446255224Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 19:56:05.449763 env[1043]: time="2023-10-02T19:56:05.449704112Z" level=info msg="CreateContainer within sandbox \"7230638f949a4f6f35fc91ba3d46f4867b3293c824d02cccc783225df92a7e3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:56:05.472327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598569001.mount: Deactivated successfully. Oct 2 19:56:05.477548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236966252.mount: Deactivated successfully. Oct 2 19:56:05.491085 kubelet[1369]: E1002 19:56:05.490963 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:05.494657 env[1043]: time="2023-10-02T19:56:05.494571852Z" level=info msg="CreateContainer within sandbox \"7230638f949a4f6f35fc91ba3d46f4867b3293c824d02cccc783225df92a7e3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23dc3d4c7caf35c2ccd4275ad29125bd59c0aaa1ff17600efae80f6a2cfb6e8b\"" Oct 2 19:56:05.496953 env[1043]: time="2023-10-02T19:56:05.496901060Z" level=info msg="StartContainer for \"23dc3d4c7caf35c2ccd4275ad29125bd59c0aaa1ff17600efae80f6a2cfb6e8b\"" Oct 2 19:56:05.520845 systemd[1]: Started cri-containerd-23dc3d4c7caf35c2ccd4275ad29125bd59c0aaa1ff17600efae80f6a2cfb6e8b.scope. 
Oct 2 19:56:05.554560 kernel: kauditd_printk_skb: 104 callbacks suppressed Oct 2 19:56:05.554841 kernel: audit: type=1400 audit(1696276565.547:610): avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=1468 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.561426 kernel: audit: type=1300 audit(1696276565.547:610): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=1468 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646333643463376361663335633263636434323735616432393132 Oct 2 19:56:05.566480 kernel: audit: type=1327 audit(1696276565.547:610): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646333643463376361663335633263636434323735616432393132 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.570436 kernel: audit: type=1400 audit(1696276565.547:611): avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.577460 kernel: audit: type=1400 audit(1696276565.547:611): avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.582436 kernel: audit: type=1400 audit(1696276565.547:611): avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.591424 kernel: audit: type=1400 audit(1696276565.547:611): avc: denied { 
perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.591517 kernel: audit: type=1400 audit(1696276565.547:611): avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.591539 kernel: audit: type=1400 audit(1696276565.547:611): avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.598879 kernel: audit: type=1400 audit(1696276565.547:611): avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit: BPF prog-id=72 op=LOAD Oct 2 19:56:05.547000 audit[1639]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014d9d8 a2=78 a3=c00038cca0 items=0 ppid=1468 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646333643463376361663335633263636434323735616432393132 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { 
perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit: BPF prog-id=73 op=LOAD Oct 2 19:56:05.547000 audit[1639]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00014d770 a2=78 a3=c00038cce8 items=0 ppid=1468 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646333643463376361663335633263636434323735616432393132 Oct 2 19:56:05.547000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:56:05.547000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { 
perfmon } for pid=1639 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit[1639]: AVC avc: denied { bpf } for pid=1639 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:05.547000 audit: BPF prog-id=74 op=LOAD Oct 2 19:56:05.547000 audit[1639]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014dc30 a2=78 a3=c00038cd78 items=0 ppid=1468 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646333643463376361663335633263636434323735616432393132 Oct 2 19:56:05.600967 env[1043]: time="2023-10-02T19:56:05.600914006Z" level=info msg="StartContainer for \"23dc3d4c7caf35c2ccd4275ad29125bd59c0aaa1ff17600efae80f6a2cfb6e8b\" returns successfully" Oct 2 19:56:05.644803 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:56:05.645017 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:56:05.645076 kernel: IPVS: ipvs loaded. Oct 2 19:56:05.668532 kernel: IPVS: [rr] scheduler registered. Oct 2 19:56:05.679537 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:56:05.688481 kernel: IPVS: [sh] scheduler registered. 
Oct 2 19:56:05.753000 audit[1699]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.753000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe577740e0 a2=0 a3=7ffe577740cc items=0 ppid=1651 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:56:05.754000 audit[1700]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1700 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:05.754000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc0986b70 a2=0 a3=7fffc0986b5c items=0 ppid=1651 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.754000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:56:05.755000 audit[1701]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1701 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:05.755000 audit[1701]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc7981260 a2=0 a3=7ffdc798124c items=0 ppid=1651 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:56:05.756000 audit[1702]: NETFILTER_CFG table=filter:38 family=10 entries=1 op=nft_register_chain pid=1702 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:05.756000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc109b3620 a2=0 a3=7ffc109b360c items=0 ppid=1651 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:56:05.757000 audit[1703]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=1703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.757000 audit[1703]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc7024450 a2=0 a3=7fffc702443c items=0 ppid=1651 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:56:05.758000 audit[1704]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=1704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 
19:56:05.758000 audit[1704]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff563ae420 a2=0 a3=7fff563ae40c items=0 ppid=1651 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:56:05.870000 audit[1705]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.870000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff1e55df00 a2=0 a3=7fff1e55deec items=0 ppid=1651 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:56:05.877000 audit[1707]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.877000 audit[1707]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdae7abdf0 a2=0 a3=7ffdae7abddc items=0 ppid=1651 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:56:05.885000 audit[1710]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1710 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.885000 audit[1710]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe597077a0 a2=0 a3=7ffe5970778c items=0 ppid=1651 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:56:05.888000 audit[1711]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.888000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecb789db0 a2=0 a3=7ffecb789d9c items=0 ppid=1651 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:56:05.894000 audit[1713]: 
NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1713 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.894000 audit[1713]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe507bd4d0 a2=0 a3=7ffe507bd4bc items=0 ppid=1651 pid=1713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:56:05.898000 audit[1714]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.898000 audit[1714]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3567ea80 a2=0 a3=7ffc3567ea6c items=0 ppid=1651 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:56:05.903000 audit[1716]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.903000 audit[1716]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcdea97e50 a2=0 a3=7ffcdea97e3c items=0 ppid=1651 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:56:05.913000 audit[1719]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.913000 audit[1719]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd6bf038c0 a2=0 a3=7ffd6bf038ac items=0 ppid=1651 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:56:05.915000 audit[1720]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.915000 audit[1720]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec519c4a0 a2=0 a3=7ffec519c48c items=0 ppid=1651 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:56:05.921000 audit[1722]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.921000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd300c4520 a2=0 a3=7ffd300c450c items=0 ppid=1651 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:56:05.923000 audit[1723]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.923000 audit[1723]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd74096210 a2=0 a3=7ffd740961fc items=0 ppid=1651 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:56:05.929000 audit[1725]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.929000 audit[1725]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff5acfca0 a2=0 a3=7ffff5acfc8c items=0 ppid=1651 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:05.937000 audit[1728]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.937000 audit[1728]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8f7738c0 a2=0 a3=7fff8f7738ac items=0 ppid=1651 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:05.946000 audit[1731]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.946000 audit[1731]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff969474f0 a2=0 a3=7fff969474dc items=0 ppid=1651 pid=1731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:56:05.949000 audit[1732]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.949000 audit[1732]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf18949e0 a2=0 a3=7ffcf18949cc items=0 ppid=1651 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.949000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:56:05.954000 audit[1734]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.954000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff89a885a0 a2=0 a3=7fff89a8858c items=0 ppid=1651 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.954000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:05.962000 audit[1737]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:05.962000 audit[1737]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffff820dc0 a2=0 a3=7fffff820dac items=0 ppid=1651 pid=1737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:05.989000 audit[1741]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1741 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:56:05.989000 audit[1741]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fffab86c1f0 a2=0 a3=7fffab86c1dc items=0 ppid=1651 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:06.011000 audit[1741]: 
NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1741 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:56:06.011000 audit[1741]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffab86c1f0 a2=0 a3=7fffab86c1dc items=0 ppid=1651 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:06.018000 audit[1745]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.018000 audit[1745]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffef8f36da0 a2=0 a3=7ffef8f36d8c items=0 ppid=1651 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:56:06.022000 audit[1747]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1747 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.022000 audit[1747]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc37b24140 a2=0 a3=7ffc37b2412c items=0 ppid=1651 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:56:06.026000 audit[1750]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1750 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.026000 audit[1750]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffad540b90 a2=0 a3=7fffad540b7c items=0 ppid=1651 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:56:06.028000 audit[1751]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.028000 audit[1751]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfaacd370 a2=0 a3=7ffdfaacd35c items=0 ppid=1651 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.028000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:56:06.031000 audit[1753]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1753 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.031000 audit[1753]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3f4072d0 a2=0 a3=7fff3f4072bc items=0 ppid=1651 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:56:06.032000 audit[1754]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.032000 audit[1754]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd07a2fd00 a2=0 a3=7ffd07a2fcec items=0 ppid=1651 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:56:06.035000 audit[1756]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.035000 audit[1756]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffde3e53780 a2=0 a3=7ffde3e5376c items=0 ppid=1651 pid=1756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:56:06.039000 audit[1759]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1759 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.039000 audit[1759]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc8d230400 a2=0 a3=7ffc8d2303ec items=0 ppid=1651 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:56:06.042000 audit[1760]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.042000 audit[1760]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7c115bd0 a2=0 a3=7ffe7c115bbc 
items=0 ppid=1651 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:56:06.047000 audit[1762]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.047000 audit[1762]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5e39f090 a2=0 a3=7fff5e39f07c items=0 ppid=1651 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:56:06.050000 audit[1763]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1763 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.050000 audit[1763]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe661e0290 a2=0 a3=7ffe661e027c items=0 ppid=1651 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:56:06.058000 audit[1765]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.058000 audit[1765]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff89a92cd0 a2=0 a3=7fff89a92cbc items=0 ppid=1651 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:06.066000 audit[1768]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.066000 audit[1768]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe338f5300 a2=0 a3=7ffe338f52ec items=0 ppid=1651 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:56:06.074000 audit[1771]: NETFILTER_CFG 
table=filter:73 family=10 entries=1 op=nft_register_rule pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.074000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee1c752c0 a2=0 a3=7ffee1c752ac items=0 ppid=1651 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:56:06.077000 audit[1772]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.077000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffee5ac7a0 a2=0 a3=7fffee5ac78c items=0 ppid=1651 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:56:06.083000 audit[1774]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.083000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe0a221bc0 a2=0 a3=7ffe0a221bac items=0 ppid=1651 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:06.088000 audit[1777]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:06.088000 audit[1777]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd4a0fa580 a2=0 a3=7ffd4a0fa56c items=0 ppid=1651 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:06.094000 audit[1781]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:56:06.094000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc4a981350 a2=0 a3=7ffc4a98133c items=0 ppid=1651 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:56:06.094000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:06.095000 audit[1781]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:56:06.095000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffc4a981350 a2=0 a3=7ffc4a98133c items=0 ppid=1651 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.095000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:06.133836 kubelet[1369]: E1002 19:56:06.133635 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:06.227905 kubelet[1369]: E1002 19:56:06.227805 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:06.510458 kubelet[1369]: W1002 19:56:06.510354 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod749d446f_a980_4e1d_bfed_f215397bd061.slice/cri-containerd-2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1.scope WatchSource:0}: container "2b3743a051858c98dbef78ace18c50385b018d2bb56776c73d0e37768294c1f1" in namespace "k8s.io": not found Oct 2 19:56:07.134300 kubelet[1369]: E1002 19:56:07.134242 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:08.135963 kubelet[1369]: E1002 19:56:08.135770 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:09.136032 kubelet[1369]: E1002 19:56:09.135970 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:09.623103 kubelet[1369]: W1002 19:56:09.623046 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod749d446f_a980_4e1d_bfed_f215397bd061.slice/cri-containerd-1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3.scope WatchSource:0}: task 1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3 not found: not found Oct 2 19:56:10.137178 kubelet[1369]: E1002 19:56:10.137076 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:11.111257 kubelet[1369]: E1002 19:56:11.111155 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:11.137717 kubelet[1369]: E1002 19:56:11.137639 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:11.229346 kubelet[1369]: E1002 19:56:11.229261 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:12.138380 kubelet[1369]: E1002 19:56:12.138307 1369 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:13.140275 kubelet[1369]: E1002 19:56:13.140150 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:14.141136 kubelet[1369]: E1002 19:56:14.141023 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:15.141326 kubelet[1369]: E1002 19:56:15.141256 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:16.142502 kubelet[1369]: E1002 19:56:16.142351 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:16.231053 kubelet[1369]: E1002 19:56:16.230961 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:17.143505 kubelet[1369]: E1002 19:56:17.143387 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:18.145074 kubelet[1369]: E1002 19:56:18.145007 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:18.395045 env[1043]: time="2023-10-02T19:56:18.394665563Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:56:18.423932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840167328.mount: Deactivated successfully. Oct 2 19:56:18.441841 env[1043]: time="2023-10-02T19:56:18.441759790Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\"" Oct 2 19:56:18.453719 env[1043]: time="2023-10-02T19:56:18.453652796Z" level=info msg="StartContainer for \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\"" Oct 2 19:56:18.499558 systemd[1]: Started cri-containerd-952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7.scope. Oct 2 19:56:18.527964 systemd[1]: cri-containerd-952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7.scope: Deactivated successfully. 
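The transient scope name embeds the container id, and the kubelet's watch-event warnings further down reference the same id through the pod's cgroup path. A small sketch of how that path is composed under the systemd cgroup driver, using the pod UID and container id taken from the surrounding log lines:

# Compose the cgroup path that appears in the manager.go watch-event warnings below.
pod_uid = "749d446f-a980-4e1d-bfed-f215397bd061"  # cilium-x8vt6 pod UID
cid = "952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7"
path = ("/kubepods.slice/kubepods-burstable.slice/"
        f"kubepods-burstable-pod{pod_uid.replace('-', '_')}.slice/"
        f"cri-containerd-{cid}.scope")
print(path)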
Oct 2 19:56:18.871047 env[1043]: time="2023-10-02T19:56:18.870947781Z" level=info msg="shim disconnected" id=952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7 Oct 2 19:56:18.871747 env[1043]: time="2023-10-02T19:56:18.871698141Z" level=warning msg="cleaning up after shim disconnected" id=952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7 namespace=k8s.io Oct 2 19:56:18.871918 env[1043]: time="2023-10-02T19:56:18.871882600Z" level=info msg="cleaning up dead shim" Oct 2 19:56:18.890130 env[1043]: time="2023-10-02T19:56:18.890022403Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1807 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:18Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:18.890699 env[1043]: time="2023-10-02T19:56:18.890588114Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:56:18.891643 env[1043]: time="2023-10-02T19:56:18.891539084Z" level=error msg="Failed to pipe stdout of container \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\"" error="reading from a closed fifo" Oct 2 19:56:18.891836 env[1043]: time="2023-10-02T19:56:18.891555816Z" level=error msg="Failed to pipe stderr of container \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\"" error="reading from a closed fifo" Oct 2 19:56:18.896351 env[1043]: time="2023-10-02T19:56:18.896242769Z" level=error msg="StartContainer for \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:18.896990 kubelet[1369]: E1002 19:56:18.896872 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7" Oct 2 19:56:18.899559 kubelet[1369]: E1002 19:56:18.897310 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:18.899559 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:18.899559 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:56:18.899559 kubelet[1369]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6qmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:18.900122 kubelet[1369]: E1002 19:56:18.897550 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:19.147548 kubelet[1369]: E1002 19:56:19.146604 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:19.417388 systemd[1]: run-containerd-runc-k8s.io-952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7-runc.5viVXl.mount: Deactivated successfully. Oct 2 19:56:19.418098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7-rootfs.mount: Deactivated successfully. 
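Each failed mount-cgroup attempt in this log ends with the same signature: runc reports "write /proc/self/attr/keycreate: invalid argument" during container init, so the shim never records an init pid and the container's stdio fifos are already closed when containerd tries to drain them ("reading from a closed fifo"). The keycreate attr is the per-process SELinux label applied to newly created kernel keyrings, which runc sets during init for SELinux-labelled containers (the spec above requests type spc_t); the EINVAL suggests the kernel rejected that write, likely because the requested context is not acceptable under the loaded policy. A read-only probe of the same interface, assuming an SELinux-enabled kernel (a sketch only; nothing is written):

# Inspect the per-process SELinux attr files involved in the failure above.
from pathlib import Path

for name in ("current", "keycreate"):            # /proc/self/attr/{current,keycreate}
    p = Path("/proc/self/attr") / name
    try:
        print(name, "=", p.read_bytes())
    except OSError as err:                       # absent or unreadable without SELinux
        print(name, "->", err)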
Oct 2 19:56:19.537670 kubelet[1369]: I1002 19:56:19.537614 1369 scope.go:115] "RemoveContainer" containerID="1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3" Oct 2 19:56:19.538359 kubelet[1369]: I1002 19:56:19.538299 1369 scope.go:115] "RemoveContainer" containerID="1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3" Oct 2 19:56:19.540948 env[1043]: time="2023-10-02T19:56:19.540819123Z" level=info msg="RemoveContainer for \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\"" Oct 2 19:56:19.541874 env[1043]: time="2023-10-02T19:56:19.541810870Z" level=info msg="RemoveContainer for \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\"" Oct 2 19:56:19.542392 env[1043]: time="2023-10-02T19:56:19.542320915Z" level=error msg="RemoveContainer for \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\" failed" error="failed to set removing state for container \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\": container is already in removing state" Oct 2 19:56:19.542960 kubelet[1369]: E1002 19:56:19.542909 1369 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\": container is already in removing state" containerID="1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3" Oct 2 19:56:19.543102 kubelet[1369]: E1002 19:56:19.542987 1369 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3": container is already in removing state; Skipping pod "cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)" Oct 2 19:56:19.543729 kubelet[1369]: E1002 19:56:19.543647 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:19.547842 env[1043]: time="2023-10-02T19:56:19.547788871Z" level=info msg="RemoveContainer for \"1496ec34d7578a330e4f51b2a747ca201b1d46415b754f5b0e24a7b7438836f3\" returns successfully" Oct 2 19:56:20.146878 kubelet[1369]: E1002 19:56:20.146817 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:21.148236 kubelet[1369]: E1002 19:56:21.148168 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:21.232291 kubelet[1369]: E1002 19:56:21.232164 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:21.979185 kubelet[1369]: W1002 19:56:21.979094 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod749d446f_a980_4e1d_bfed_f215397bd061.slice/cri-containerd-952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7.scope WatchSource:0}: task 952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7 not found: not found Oct 2 19:56:22.149444 kubelet[1369]: E1002 19:56:22.149283 1369 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:23.149684 kubelet[1369]: E1002 19:56:23.149552 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:24.150888 kubelet[1369]: E1002 19:56:24.150816 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:25.151787 kubelet[1369]: E1002 19:56:25.151699 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:26.152707 kubelet[1369]: E1002 19:56:26.152584 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:26.233566 kubelet[1369]: E1002 19:56:26.233478 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:27.154869 kubelet[1369]: E1002 19:56:27.154799 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:28.156800 kubelet[1369]: E1002 19:56:28.156721 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:29.158209 kubelet[1369]: E1002 19:56:29.158118 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:30.159389 kubelet[1369]: E1002 19:56:30.159322 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:31.111223 kubelet[1369]: E1002 19:56:31.111147 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:31.161387 kubelet[1369]: E1002 19:56:31.161229 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:31.235284 kubelet[1369]: E1002 19:56:31.235161 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:32.162009 kubelet[1369]: E1002 19:56:32.161888 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:33.164112 kubelet[1369]: E1002 19:56:33.164002 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:34.165052 kubelet[1369]: E1002 19:56:34.164984 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:34.391148 kubelet[1369]: E1002 19:56:34.391041 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:35.166748 kubelet[1369]: E1002 19:56:35.166679 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:36.168254 kubelet[1369]: 
E1002 19:56:36.168050 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:36.237487 kubelet[1369]: E1002 19:56:36.237397 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:37.168917 kubelet[1369]: E1002 19:56:37.168819 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:38.170012 kubelet[1369]: E1002 19:56:38.169938 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:39.171622 kubelet[1369]: E1002 19:56:39.171384 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:40.172680 kubelet[1369]: E1002 19:56:40.172561 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:41.172864 kubelet[1369]: E1002 19:56:41.172779 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:41.239278 kubelet[1369]: E1002 19:56:41.239237 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:42.174633 kubelet[1369]: E1002 19:56:42.174532 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:43.175904 kubelet[1369]: E1002 19:56:43.175769 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:44.176633 kubelet[1369]: E1002 19:56:44.176535 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:45.177301 kubelet[1369]: E1002 19:56:45.177233 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:46.179096 kubelet[1369]: E1002 19:56:46.178985 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:46.241636 kubelet[1369]: E1002 19:56:46.241578 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:47.179963 kubelet[1369]: E1002 19:56:47.179795 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:48.180168 kubelet[1369]: E1002 19:56:48.180079 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:49.181043 kubelet[1369]: E1002 19:56:49.180964 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:49.396509 env[1043]: time="2023-10-02T19:56:49.396388692Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:56:49.416473 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3791335593.mount: Deactivated successfully. Oct 2 19:56:49.434029 env[1043]: time="2023-10-02T19:56:49.433343088Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\"" Oct 2 19:56:49.435797 env[1043]: time="2023-10-02T19:56:49.435748354Z" level=info msg="StartContainer for \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\"" Oct 2 19:56:49.489852 systemd[1]: Started cri-containerd-fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574.scope. Oct 2 19:56:49.505355 systemd[1]: cri-containerd-fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574.scope: Deactivated successfully. Oct 2 19:56:49.519651 env[1043]: time="2023-10-02T19:56:49.519598436Z" level=info msg="shim disconnected" id=fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574 Oct 2 19:56:49.519651 env[1043]: time="2023-10-02T19:56:49.519652106Z" level=warning msg="cleaning up after shim disconnected" id=fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574 namespace=k8s.io Oct 2 19:56:49.519866 env[1043]: time="2023-10-02T19:56:49.519665802Z" level=info msg="cleaning up dead shim" Oct 2 19:56:49.528109 env[1043]: time="2023-10-02T19:56:49.528047201Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1850 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:49.528412 env[1043]: time="2023-10-02T19:56:49.528332107Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:56:49.531506 env[1043]: time="2023-10-02T19:56:49.531453158Z" level=error msg="Failed to pipe stdout of container \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\"" error="reading from a closed fifo" Oct 2 19:56:49.531656 env[1043]: time="2023-10-02T19:56:49.531603079Z" level=error msg="Failed to pipe stderr of container \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\"" error="reading from a closed fifo" Oct 2 19:56:49.535010 env[1043]: time="2023-10-02T19:56:49.534972326Z" level=error msg="StartContainer for \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:49.535346 kubelet[1369]: E1002 19:56:49.535323 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574" Oct 2 19:56:49.535475 kubelet[1369]: E1002 19:56:49.535456 1369 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:49.535475 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:49.535475 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:56:49.535475 kubelet[1369]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6qmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:49.535678 kubelet[1369]: E1002 19:56:49.535506 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:49.625748 kubelet[1369]: I1002 19:56:49.624373 1369 scope.go:115] "RemoveContainer" containerID="952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7" Oct 2 19:56:49.625748 kubelet[1369]: I1002 19:56:49.625695 1369 scope.go:115] "RemoveContainer" containerID="952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7" Oct 2 19:56:49.629680 env[1043]: time="2023-10-02T19:56:49.629506514Z" level=info msg="RemoveContainer for \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\"" Oct 2 19:56:49.634614 env[1043]: time="2023-10-02T19:56:49.634516242Z" level=info msg="RemoveContainer for \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\"" Oct 2 19:56:49.635536 env[1043]: time="2023-10-02T19:56:49.635383721Z" level=error msg="RemoveContainer for \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\" failed" error="failed to set removing state for container 
\"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\": container is already in removing state" Oct 2 19:56:49.637802 env[1043]: time="2023-10-02T19:56:49.637607918Z" level=info msg="RemoveContainer for \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\" returns successfully" Oct 2 19:56:49.638617 kubelet[1369]: E1002 19:56:49.638539 1369 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7\": container is already in removing state" containerID="952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7" Oct 2 19:56:49.638617 kubelet[1369]: E1002 19:56:49.638649 1369 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "952d72bca3ff7e7210763a6ffcf7bf270913f8727fc7cf18abc8a76dbea5d6c7": container is already in removing state; Skipping pod "cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)" Oct 2 19:56:49.639977 kubelet[1369]: E1002 19:56:49.639938 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:56:50.181372 kubelet[1369]: E1002 19:56:50.181283 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:50.411232 systemd[1]: run-containerd-runc-k8s.io-fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574-runc.LH8o3n.mount: Deactivated successfully. Oct 2 19:56:50.411516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574-rootfs.mount: Deactivated successfully. 
Oct 2 19:56:51.111754 kubelet[1369]: E1002 19:56:51.111587 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:51.182584 kubelet[1369]: E1002 19:56:51.182448 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:51.243181 kubelet[1369]: E1002 19:56:51.243092 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:52.183666 kubelet[1369]: E1002 19:56:52.183546 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.629286 kubelet[1369]: W1002 19:56:52.629229 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod749d446f_a980_4e1d_bfed_f215397bd061.slice/cri-containerd-fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574.scope WatchSource:0}: task fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574 not found: not found Oct 2 19:56:53.185245 kubelet[1369]: E1002 19:56:53.185169 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:54.186926 kubelet[1369]: E1002 19:56:54.186817 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.187309 kubelet[1369]: E1002 19:56:55.187219 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:56.189237 kubelet[1369]: E1002 19:56:56.189144 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:56.245094 kubelet[1369]: E1002 19:56:56.245062 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:57.190067 kubelet[1369]: E1002 19:56:57.189996 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:58.191784 kubelet[1369]: E1002 19:56:58.191713 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:59.193237 kubelet[1369]: E1002 19:56:59.193162 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:00.193760 kubelet[1369]: E1002 19:57:00.193383 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:01.194026 kubelet[1369]: E1002 19:57:01.193966 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:01.247002 kubelet[1369]: E1002 19:57:01.246906 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:02.195608 kubelet[1369]: E1002 19:57:02.195539 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:03.196998 kubelet[1369]: E1002 
19:57:03.196919 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:03.391505 kubelet[1369]: E1002 19:57:03.391393 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:57:04.198077 kubelet[1369]: E1002 19:57:04.198022 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:05.199120 kubelet[1369]: E1002 19:57:05.199053 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.200649 kubelet[1369]: E1002 19:57:06.200565 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.248649 kubelet[1369]: E1002 19:57:06.248603 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:07.202067 kubelet[1369]: E1002 19:57:07.201925 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:08.204152 kubelet[1369]: E1002 19:57:08.204076 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:09.205646 kubelet[1369]: E1002 19:57:09.205566 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:10.206826 kubelet[1369]: E1002 19:57:10.206712 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.111284 kubelet[1369]: E1002 19:57:11.111153 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.207224 kubelet[1369]: E1002 19:57:11.207168 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.249726 kubelet[1369]: E1002 19:57:11.249627 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:12.208947 kubelet[1369]: E1002 19:57:12.208879 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:13.210589 kubelet[1369]: E1002 19:57:13.210514 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:14.211594 kubelet[1369]: E1002 19:57:14.211515 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:14.391049 kubelet[1369]: E1002 19:57:14.390956 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" 
podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:57:15.212815 kubelet[1369]: E1002 19:57:15.212737 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:16.213349 kubelet[1369]: E1002 19:57:16.213239 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:16.250880 kubelet[1369]: E1002 19:57:16.250780 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:17.213730 kubelet[1369]: E1002 19:57:17.213654 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:18.214291 kubelet[1369]: E1002 19:57:18.214188 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:19.215381 kubelet[1369]: E1002 19:57:19.215275 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:20.216610 kubelet[1369]: E1002 19:57:20.216536 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:21.217362 kubelet[1369]: E1002 19:57:21.217288 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:21.252362 kubelet[1369]: E1002 19:57:21.252319 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:22.219100 kubelet[1369]: E1002 19:57:22.219019 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:23.219960 kubelet[1369]: E1002 19:57:23.219843 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:24.220672 kubelet[1369]: E1002 19:57:24.220594 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:25.222276 kubelet[1369]: E1002 19:57:25.222198 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:26.224033 kubelet[1369]: E1002 19:57:26.223914 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:26.254655 kubelet[1369]: E1002 19:57:26.254570 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:27.224963 kubelet[1369]: E1002 19:57:27.224896 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.391091 kubelet[1369]: E1002 19:57:27.391029 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:57:28.225904 kubelet[1369]: 
E1002 19:57:28.225822 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:29.227543 kubelet[1369]: E1002 19:57:29.227478 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:30.229282 kubelet[1369]: E1002 19:57:30.229226 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.111656 kubelet[1369]: E1002 19:57:31.111526 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.230889 kubelet[1369]: E1002 19:57:31.230846 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.255882 kubelet[1369]: E1002 19:57:31.255832 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:32.231624 kubelet[1369]: E1002 19:57:32.231567 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:33.232865 kubelet[1369]: E1002 19:57:33.232803 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:34.234525 kubelet[1369]: E1002 19:57:34.234452 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:35.236262 kubelet[1369]: E1002 19:57:35.236142 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:36.236940 kubelet[1369]: E1002 19:57:36.236858 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:36.257687 kubelet[1369]: E1002 19:57:36.257532 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:37.237575 kubelet[1369]: E1002 19:57:37.237510 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:38.239541 kubelet[1369]: E1002 19:57:38.239476 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:38.396101 env[1043]: time="2023-10-02T19:57:38.395961260Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:57:38.416504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115691580.mount: Deactivated successfully. 
Oct 2 19:57:38.428793 env[1043]: time="2023-10-02T19:57:38.428703561Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\"" Oct 2 19:57:38.430888 env[1043]: time="2023-10-02T19:57:38.430829890Z" level=info msg="StartContainer for \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\"" Oct 2 19:57:38.478050 systemd[1]: Started cri-containerd-00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14.scope. Oct 2 19:57:38.494060 systemd[1]: cri-containerd-00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14.scope: Deactivated successfully. Oct 2 19:57:38.509198 env[1043]: time="2023-10-02T19:57:38.509138534Z" level=info msg="shim disconnected" id=00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14 Oct 2 19:57:38.509198 env[1043]: time="2023-10-02T19:57:38.509191824Z" level=warning msg="cleaning up after shim disconnected" id=00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14 namespace=k8s.io Oct 2 19:57:38.509198 env[1043]: time="2023-10-02T19:57:38.509204278Z" level=info msg="cleaning up dead shim" Oct 2 19:57:38.525031 env[1043]: time="2023-10-02T19:57:38.524971756Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1893 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:38.525579 env[1043]: time="2023-10-02T19:57:38.525519949Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:57:38.526528 env[1043]: time="2023-10-02T19:57:38.526461767Z" level=error msg="Failed to pipe stdout of container \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\"" error="reading from a closed fifo" Oct 2 19:57:38.526607 env[1043]: time="2023-10-02T19:57:38.526477417Z" level=error msg="Failed to pipe stderr of container \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\"" error="reading from a closed fifo" Oct 2 19:57:38.529806 env[1043]: time="2023-10-02T19:57:38.529761268Z" level=error msg="StartContainer for \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:38.530272 kubelet[1369]: E1002 19:57:38.530032 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14" Oct 2 19:57:38.530272 kubelet[1369]: E1002 19:57:38.530197 1369 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:38.530272 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:38.530272 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:57:38.530483 kubelet[1369]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6qmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:38.530562 kubelet[1369]: E1002 19:57:38.530248 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:57:38.765459 kubelet[1369]: I1002 19:57:38.764191 1369 scope.go:115] "RemoveContainer" containerID="fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574" Oct 2 19:57:38.765956 kubelet[1369]: I1002 19:57:38.765889 1369 scope.go:115] "RemoveContainer" containerID="fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574" Oct 2 19:57:38.769139 env[1043]: time="2023-10-02T19:57:38.769020570Z" level=info msg="RemoveContainer for \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\"" Oct 2 19:57:38.772147 env[1043]: time="2023-10-02T19:57:38.772083718Z" level=info msg="RemoveContainer for \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\"" Oct 2 19:57:38.772607 env[1043]: time="2023-10-02T19:57:38.772536243Z" level=error msg="RemoveContainer for \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\" failed" error="failed to set removing state for container 
\"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\": container is already in removing state" Oct 2 19:57:38.773479 kubelet[1369]: E1002 19:57:38.773451 1369 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\": container is already in removing state" containerID="fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574" Oct 2 19:57:38.773784 kubelet[1369]: E1002 19:57:38.773724 1369 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574": container is already in removing state; Skipping pod "cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)" Oct 2 19:57:38.776492 kubelet[1369]: E1002 19:57:38.776454 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:57:38.785724 env[1043]: time="2023-10-02T19:57:38.785486142Z" level=info msg="RemoveContainer for \"fd0e9a4007844e24d3b009e6d98a0a305db3322743e1d98d0b4b0b2308153574\" returns successfully" Oct 2 19:57:39.240924 kubelet[1369]: E1002 19:57:39.240855 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:39.410292 systemd[1]: run-containerd-runc-k8s.io-00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14-runc.3OPgSa.mount: Deactivated successfully. Oct 2 19:57:39.410565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14-rootfs.mount: Deactivated successfully. 
Oct 2 19:57:40.241469 kubelet[1369]: E1002 19:57:40.241340 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:41.242262 kubelet[1369]: E1002 19:57:41.242168 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:41.259174 kubelet[1369]: E1002 19:57:41.259076 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:41.617454 kubelet[1369]: W1002 19:57:41.617354 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod749d446f_a980_4e1d_bfed_f215397bd061.slice/cri-containerd-00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14.scope WatchSource:0}: task 00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14 not found: not found Oct 2 19:57:42.242960 kubelet[1369]: E1002 19:57:42.242901 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:43.244154 kubelet[1369]: E1002 19:57:43.244102 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:44.245551 kubelet[1369]: E1002 19:57:44.245488 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:45.246920 kubelet[1369]: E1002 19:57:45.246862 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:46.248021 kubelet[1369]: E1002 19:57:46.247964 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:46.260040 kubelet[1369]: E1002 19:57:46.260000 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:47.249925 kubelet[1369]: E1002 19:57:47.249869 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:48.251329 kubelet[1369]: E1002 19:57:48.251241 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.253582 kubelet[1369]: E1002 19:57:49.253509 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:50.254222 kubelet[1369]: E1002 19:57:50.254150 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:51.111218 kubelet[1369]: E1002 19:57:51.111118 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:51.255503 kubelet[1369]: E1002 19:57:51.255453 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:51.262717 kubelet[1369]: E1002 19:57:51.262615 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:52.257240 kubelet[1369]: E1002 
19:57:52.257179 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:53.258205 kubelet[1369]: E1002 19:57:53.258120 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:54.259817 kubelet[1369]: E1002 19:57:54.259747 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:54.391437 kubelet[1369]: E1002 19:57:54.391286 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:57:55.260027 kubelet[1369]: E1002 19:57:55.259966 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.261147 kubelet[1369]: E1002 19:57:56.261087 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.264744 kubelet[1369]: E1002 19:57:56.264710 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:57.263117 kubelet[1369]: E1002 19:57:57.263064 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:58.264080 kubelet[1369]: E1002 19:57:58.264019 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:59.265602 kubelet[1369]: E1002 19:57:59.265547 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:00.267204 kubelet[1369]: E1002 19:58:00.267141 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:01.266116 kubelet[1369]: E1002 19:58:01.266016 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:01.268639 kubelet[1369]: E1002 19:58:01.268591 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:02.269542 kubelet[1369]: E1002 19:58:02.269484 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:03.271087 kubelet[1369]: E1002 19:58:03.271028 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:04.272714 kubelet[1369]: E1002 19:58:04.272657 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:05.274620 kubelet[1369]: E1002 19:58:05.274524 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:05.390933 kubelet[1369]: E1002 19:58:05.390880 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with 
CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:58:06.267597 kubelet[1369]: E1002 19:58:06.267568 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:06.275095 kubelet[1369]: E1002 19:58:06.275068 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.275775 kubelet[1369]: E1002 19:58:07.275714 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:08.277254 kubelet[1369]: E1002 19:58:08.277179 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:09.278826 kubelet[1369]: E1002 19:58:09.278298 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:10.279763 kubelet[1369]: E1002 19:58:10.279703 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:11.111246 kubelet[1369]: E1002 19:58:11.111179 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:11.269079 kubelet[1369]: E1002 19:58:11.268993 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:11.280885 kubelet[1369]: E1002 19:58:11.280798 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.281555 kubelet[1369]: E1002 19:58:12.281496 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.363731 update_engine[1038]: I1002 19:58:12.363601 1038 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:58:12.363731 update_engine[1038]: I1002 19:58:12.363672 1038 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:58:12.365077 update_engine[1038]: I1002 19:58:12.365024 1038 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:58:12.365939 update_engine[1038]: I1002 19:58:12.365891 1038 omaha_request_params.cc:62] Current group set to lts Oct 2 19:58:12.366322 update_engine[1038]: I1002 19:58:12.366271 1038 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:58:12.366322 update_engine[1038]: I1002 19:58:12.366299 1038 update_attempter.cc:638] Scheduling an action processor start. 
Oct 2 19:58:12.367029 update_engine[1038]: I1002 19:58:12.366329 1038 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:58:12.367029 update_engine[1038]: I1002 19:58:12.366381 1038 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:58:12.367832 update_engine[1038]: I1002 19:58:12.367602 1038 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:58:12.367832 update_engine[1038]: I1002 19:58:12.367631 1038 omaha_request_action.cc:269] Request: Oct 2 19:58:12.367832 update_engine[1038]: Oct 2 19:58:12.367832 update_engine[1038]: Oct 2 19:58:12.367832 update_engine[1038]: Oct 2 19:58:12.367832 update_engine[1038]: Oct 2 19:58:12.367832 update_engine[1038]: Oct 2 19:58:12.367832 update_engine[1038]: Oct 2 19:58:12.367832 update_engine[1038]: Oct 2 19:58:12.367832 update_engine[1038]: Oct 2 19:58:12.367832 update_engine[1038]: I1002 19:58:12.367641 1038 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:58:12.368632 locksmithd[1087]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:58:12.369974 update_engine[1038]: I1002 19:58:12.369925 1038 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:58:12.370368 update_engine[1038]: I1002 19:58:12.370322 1038 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 2 19:58:13.283047 kubelet[1369]: E1002 19:58:13.282904 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:13.453287 update_engine[1038]: I1002 19:58:13.453218 1038 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:58:13.454469 update_engine[1038]: I1002 19:58:13.454437 1038 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:58:13.455003 update_engine[1038]: I1002 19:58:13.454970 1038 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:58:13.520034 update_engine[1038]: I1002 19:58:13.519992 1038 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:58:13.522966 update_engine[1038]: I1002 19:58:13.522933 1038 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:58:13.523132 update_engine[1038]: I1002 19:58:13.523109 1038 omaha_request_action.cc:619] Omaha request response: Oct 2 19:58:13.523132 update_engine[1038]: Oct 2 19:58:13.536720 update_engine[1038]: I1002 19:58:13.535825 1038 omaha_request_action.cc:409] No update. Oct 2 19:58:13.536996 update_engine[1038]: I1002 19:58:13.536911 1038 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:58:13.537143 update_engine[1038]: I1002 19:58:13.537120 1038 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 19:58:13.537280 update_engine[1038]: I1002 19:58:13.537257 1038 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 19:58:13.537440 update_engine[1038]: I1002 19:58:13.537384 1038 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:58:13.537584 update_engine[1038]: I1002 19:58:13.537561 1038 update_attempter.cc:302] Processing Done. 
Oct 2 19:58:13.537721 update_engine[1038]: I1002 19:58:13.537699 1038 update_attempter.cc:338] No update. Oct 2 19:58:13.537864 update_engine[1038]: I1002 19:58:13.537837 1038 update_check_scheduler.cc:74] Next update check in 43m5s Oct 2 19:58:13.538676 locksmithd[1087]: LastCheckedTime=1696276693 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:58:14.283464 kubelet[1369]: E1002 19:58:14.283370 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:15.285283 kubelet[1369]: E1002 19:58:15.285213 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:16.270708 kubelet[1369]: E1002 19:58:16.270600 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:16.286492 kubelet[1369]: E1002 19:58:16.286452 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.287819 kubelet[1369]: E1002 19:58:17.287766 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:18.288943 kubelet[1369]: E1002 19:58:18.288884 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:18.390794 kubelet[1369]: E1002 19:58:18.390711 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:58:19.290694 kubelet[1369]: E1002 19:58:19.290568 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:20.291825 kubelet[1369]: E1002 19:58:20.291750 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:21.272684 kubelet[1369]: E1002 19:58:21.272546 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:21.293590 kubelet[1369]: E1002 19:58:21.293525 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.294521 kubelet[1369]: E1002 19:58:22.294445 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:23.295966 kubelet[1369]: E1002 19:58:23.295898 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:24.297692 kubelet[1369]: E1002 19:58:24.297624 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:25.298485 kubelet[1369]: E1002 19:58:25.298380 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:26.274979 kubelet[1369]: E1002 19:58:26.274911 1369 kubelet.go:2373] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:26.298604 kubelet[1369]: E1002 19:58:26.298555 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.299696 kubelet[1369]: E1002 19:58:27.299617 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:28.299924 kubelet[1369]: E1002 19:58:28.299869 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:29.301166 kubelet[1369]: E1002 19:58:29.301094 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:30.302159 kubelet[1369]: E1002 19:58:30.302087 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:31.111244 kubelet[1369]: E1002 19:58:31.111193 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:31.276495 kubelet[1369]: E1002 19:58:31.276454 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:31.302347 kubelet[1369]: E1002 19:58:31.302293 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.303755 kubelet[1369]: E1002 19:58:32.303702 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.390544 kubelet[1369]: E1002 19:58:32.390451 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:58:33.305117 kubelet[1369]: E1002 19:58:33.304955 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:34.305819 kubelet[1369]: E1002 19:58:34.305675 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:35.306994 kubelet[1369]: E1002 19:58:35.306864 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:36.278649 kubelet[1369]: E1002 19:58:36.278554 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:36.307994 kubelet[1369]: E1002 19:58:36.307953 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:37.308540 kubelet[1369]: E1002 19:58:37.308386 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:38.309010 kubelet[1369]: E1002 19:58:38.308933 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:39.310649 
kubelet[1369]: E1002 19:58:39.310600 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:40.312291 kubelet[1369]: E1002 19:58:40.312192 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:41.280018 kubelet[1369]: E1002 19:58:41.279890 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:41.313314 kubelet[1369]: E1002 19:58:41.313185 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:42.313824 kubelet[1369]: E1002 19:58:42.313748 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:43.314012 kubelet[1369]: E1002 19:58:43.313953 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:44.315494 kubelet[1369]: E1002 19:58:44.315439 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:45.316840 kubelet[1369]: E1002 19:58:45.316779 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:45.391030 kubelet[1369]: E1002 19:58:45.390988 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:58:46.281652 kubelet[1369]: E1002 19:58:46.281536 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:46.318564 kubelet[1369]: E1002 19:58:46.318507 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.318923 kubelet[1369]: E1002 19:58:47.318781 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:48.319910 kubelet[1369]: E1002 19:58:48.319774 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:49.320999 kubelet[1369]: E1002 19:58:49.320944 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:50.321958 kubelet[1369]: E1002 19:58:50.321900 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:51.111008 kubelet[1369]: E1002 19:58:51.110911 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:51.283089 kubelet[1369]: E1002 19:58:51.282999 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:51.323091 kubelet[1369]: E1002 19:58:51.323018 1369 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:52.323690 kubelet[1369]: E1002 19:58:52.323634 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:53.325503 kubelet[1369]: E1002 19:58:53.325434 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:54.326644 kubelet[1369]: E1002 19:58:54.326522 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:55.327059 kubelet[1369]: E1002 19:58:55.326945 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:56.284815 kubelet[1369]: E1002 19:58:56.284733 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:56.327798 kubelet[1369]: E1002 19:58:56.327716 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:57.328923 kubelet[1369]: E1002 19:58:57.328853 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:58.330745 kubelet[1369]: E1002 19:58:58.330630 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:59.331970 kubelet[1369]: E1002 19:58:59.331802 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:59.398449 env[1043]: time="2023-10-02T19:58:59.398283587Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:58:59.423480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597970414.mount: Deactivated successfully. Oct 2 19:58:59.441123 env[1043]: time="2023-10-02T19:58:59.441005831Z" level=info msg="CreateContainer within sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a\"" Oct 2 19:58:59.442605 env[1043]: time="2023-10-02T19:58:59.442531189Z" level=info msg="StartContainer for \"389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a\"" Oct 2 19:58:59.496387 systemd[1]: Started cri-containerd-389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a.scope. Oct 2 19:58:59.512227 systemd[1]: cri-containerd-389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a.scope: Deactivated successfully. 
Oct 2 19:58:59.526167 env[1043]: time="2023-10-02T19:58:59.526102188Z" level=info msg="shim disconnected" id=389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a Oct 2 19:58:59.526167 env[1043]: time="2023-10-02T19:58:59.526162691Z" level=warning msg="cleaning up after shim disconnected" id=389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a namespace=k8s.io Oct 2 19:58:59.526167 env[1043]: time="2023-10-02T19:58:59.526176437Z" level=info msg="cleaning up dead shim" Oct 2 19:58:59.536771 env[1043]: time="2023-10-02T19:58:59.536701183Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:58:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1937 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:58:59Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:58:59.537139 env[1043]: time="2023-10-02T19:58:59.537058532Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:58:59.537506 env[1043]: time="2023-10-02T19:58:59.537458752Z" level=error msg="Failed to pipe stderr of container \"389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a\"" error="reading from a closed fifo" Oct 2 19:58:59.541767 env[1043]: time="2023-10-02T19:58:59.541712531Z" level=error msg="Failed to pipe stdout of container \"389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a\"" error="reading from a closed fifo" Oct 2 19:58:59.545802 env[1043]: time="2023-10-02T19:58:59.545725499Z" level=error msg="StartContainer for \"389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:58:59.546600 kubelet[1369]: E1002 19:58:59.546571 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a" Oct 2 19:58:59.546712 kubelet[1369]: E1002 19:58:59.546693 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:58:59.546712 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:58:59.546712 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:58:59.546712 kubelet[1369]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6qmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:58:59.546886 kubelet[1369]: E1002 19:58:59.546753 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:58:59.997209 kubelet[1369]: I1002 19:58:59.997157 1369 scope.go:115] "RemoveContainer" containerID="00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14" Oct 2 19:58:59.998177 kubelet[1369]: I1002 19:58:59.998107 1369 scope.go:115] "RemoveContainer" containerID="00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14" Oct 2 19:59:00.002048 env[1043]: time="2023-10-02T19:59:00.001984786Z" level=info msg="RemoveContainer for \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\"" Oct 2 19:59:00.002796 env[1043]: time="2023-10-02T19:59:00.002644842Z" level=info msg="RemoveContainer for \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\"" Oct 2 19:59:00.004185 env[1043]: time="2023-10-02T19:59:00.003948165Z" level=error msg="RemoveContainer for \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\" failed" error="failed to set removing state for container \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\": container is already in removing state" Oct 2 19:59:00.004682 kubelet[1369]: E1002 19:59:00.004629 1369 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\": container is already in removing state" 
containerID="00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14" Oct 2 19:59:00.004846 kubelet[1369]: E1002 19:59:00.004713 1369 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14": container is already in removing state; Skipping pod "cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)" Oct 2 19:59:00.005515 kubelet[1369]: E1002 19:59:00.005467 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:59:00.010486 env[1043]: time="2023-10-02T19:59:00.010340249Z" level=info msg="RemoveContainer for \"00d1a9087b61316b6b2143bae88a26c2ea29dec6327da410ff78015bf5b5bc14\" returns successfully" Oct 2 19:59:00.333304 kubelet[1369]: E1002 19:59:00.333122 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:00.416785 systemd[1]: run-containerd-runc-k8s.io-389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a-runc.qab8tt.mount: Deactivated successfully. Oct 2 19:59:00.417049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a-rootfs.mount: Deactivated successfully. Oct 2 19:59:01.286151 kubelet[1369]: E1002 19:59:01.286089 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:01.333387 kubelet[1369]: E1002 19:59:01.333322 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.333944 kubelet[1369]: E1002 19:59:02.333847 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.634046 kubelet[1369]: W1002 19:59:02.633959 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod749d446f_a980_4e1d_bfed_f215397bd061.slice/cri-containerd-389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a.scope WatchSource:0}: task 389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a not found: not found Oct 2 19:59:03.334709 kubelet[1369]: E1002 19:59:03.334646 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:04.336218 kubelet[1369]: E1002 19:59:04.336159 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:05.337895 kubelet[1369]: E1002 19:59:05.337833 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:06.287360 kubelet[1369]: E1002 19:59:06.287315 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:06.339213 kubelet[1369]: E1002 19:59:06.339117 1369 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.339461 kubelet[1369]: E1002 19:59:07.339244 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:08.339624 kubelet[1369]: E1002 19:59:08.339565 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:09.340592 kubelet[1369]: E1002 19:59:09.340536 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:10.342498 kubelet[1369]: E1002 19:59:10.342385 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:11.110802 kubelet[1369]: E1002 19:59:11.110710 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:11.289247 kubelet[1369]: E1002 19:59:11.289170 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:11.343442 kubelet[1369]: E1002 19:59:11.343370 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:12.344692 kubelet[1369]: E1002 19:59:12.344599 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.345301 kubelet[1369]: E1002 19:59:13.345153 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.391562 kubelet[1369]: E1002 19:59:13.391481 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:59:14.346453 kubelet[1369]: E1002 19:59:14.346260 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:15.346867 kubelet[1369]: E1002 19:59:15.346788 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:16.290837 kubelet[1369]: E1002 19:59:16.290757 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:16.348013 kubelet[1369]: E1002 19:59:16.347850 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:17.348524 kubelet[1369]: E1002 19:59:17.348359 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:18.348973 kubelet[1369]: E1002 19:59:18.348914 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:19.349899 kubelet[1369]: E1002 19:59:19.349826 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:20.350708 kubelet[1369]: E1002 19:59:20.350601 1369 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:21.292253 kubelet[1369]: E1002 19:59:21.292157 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:21.351659 kubelet[1369]: E1002 19:59:21.351588 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:22.353178 kubelet[1369]: E1002 19:59:22.352990 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:23.353552 kubelet[1369]: E1002 19:59:23.353479 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:24.354678 kubelet[1369]: E1002 19:59:24.354623 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:24.390806 kubelet[1369]: E1002 19:59:24.390716 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-x8vt6_kube-system(749d446f-a980-4e1d-bfed-f215397bd061)\"" pod="kube-system/cilium-x8vt6" podUID=749d446f-a980-4e1d-bfed-f215397bd061 Oct 2 19:59:25.356146 kubelet[1369]: E1002 19:59:25.355934 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:26.293588 kubelet[1369]: E1002 19:59:26.293503 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:26.356861 kubelet[1369]: E1002 19:59:26.356816 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.357018 kubelet[1369]: E1002 19:59:27.356954 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:28.357935 kubelet[1369]: E1002 19:59:28.357873 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:29.359437 kubelet[1369]: E1002 19:59:29.359109 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:30.359620 kubelet[1369]: E1002 19:59:30.359572 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.111136 kubelet[1369]: E1002 19:59:31.111092 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.295948 kubelet[1369]: E1002 19:59:31.295868 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:31.360953 kubelet[1369]: E1002 19:59:31.360915 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:32.362661 kubelet[1369]: E1002 19:59:32.362597 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:59:32.438239 env[1043]: time="2023-10-02T19:59:32.438098987Z" level=info msg="StopPodSandbox for \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\"" Oct 2 19:59:32.443160 env[1043]: time="2023-10-02T19:59:32.438299876Z" level=info msg="Container to stop \"389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:59:32.441636 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574-shm.mount: Deactivated successfully. Oct 2 19:59:32.456000 audit: BPF prog-id=64 op=UNLOAD Oct 2 19:59:32.457785 systemd[1]: cri-containerd-c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574.scope: Deactivated successfully. Oct 2 19:59:32.460869 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:59:32.461025 kernel: audit: type=1334 audit(1696276772.456:660): prog-id=64 op=UNLOAD Oct 2 19:59:32.465000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:59:32.470607 kernel: audit: type=1334 audit(1696276772.465:661): prog-id=67 op=UNLOAD Oct 2 19:59:32.507202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574-rootfs.mount: Deactivated successfully. Oct 2 19:59:32.519317 env[1043]: time="2023-10-02T19:59:32.519244612Z" level=info msg="shim disconnected" id=c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574 Oct 2 19:59:32.519666 env[1043]: time="2023-10-02T19:59:32.519625941Z" level=warning msg="cleaning up after shim disconnected" id=c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574 namespace=k8s.io Oct 2 19:59:32.520142 env[1043]: time="2023-10-02T19:59:32.520101518Z" level=info msg="cleaning up dead shim" Oct 2 19:59:32.528869 env[1043]: time="2023-10-02T19:59:32.528809117Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1974 runtime=io.containerd.runc.v2\n" Oct 2 19:59:32.529632 env[1043]: time="2023-10-02T19:59:32.529582375Z" level=info msg="TearDown network for sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" successfully" Oct 2 19:59:32.529830 env[1043]: time="2023-10-02T19:59:32.529786700Z" level=info msg="StopPodSandbox for \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" returns successfully" Oct 2 19:59:32.592872 kubelet[1369]: I1002 19:59:32.592737 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-host-proc-sys-net\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.592872 kubelet[1369]: I1002 19:59:32.592815 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-hostproc\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.592872 kubelet[1369]: I1002 19:59:32.592868 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cni-path\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593285 kubelet[1369]: I1002 19:59:32.592918 
1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-etc-cni-netd\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593285 kubelet[1369]: I1002 19:59:32.592984 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qmwp\" (UniqueName: \"kubernetes.io/projected/749d446f-a980-4e1d-bfed-f215397bd061-kube-api-access-6qmwp\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593285 kubelet[1369]: I1002 19:59:32.593033 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cilium-run\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593285 kubelet[1369]: I1002 19:59:32.593082 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-lib-modules\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593285 kubelet[1369]: I1002 19:59:32.593134 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cilium-cgroup\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593285 kubelet[1369]: I1002 19:59:32.593187 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-host-proc-sys-kernel\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593742 kubelet[1369]: I1002 19:59:32.593237 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-bpf-maps\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593742 kubelet[1369]: I1002 19:59:32.593285 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-xtables-lock\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593742 kubelet[1369]: I1002 19:59:32.593340 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/749d446f-a980-4e1d-bfed-f215397bd061-hubble-tls\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593742 kubelet[1369]: I1002 19:59:32.593438 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/749d446f-a980-4e1d-bfed-f215397bd061-clustermesh-secrets\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.593742 kubelet[1369]: I1002 19:59:32.593533 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/749d446f-a980-4e1d-bfed-f215397bd061-cilium-config-path\") pod \"749d446f-a980-4e1d-bfed-f215397bd061\" (UID: \"749d446f-a980-4e1d-bfed-f215397bd061\") " Oct 2 19:59:32.594068 kubelet[1369]: W1002 19:59:32.593865 1369 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/749d446f-a980-4e1d-bfed-f215397bd061/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:59:32.594295 kubelet[1369]: I1002 19:59:32.594228 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.594442 kubelet[1369]: I1002 19:59:32.594321 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-hostproc" (OuterVolumeSpecName: "hostproc") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.594442 kubelet[1369]: I1002 19:59:32.594363 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cni-path" (OuterVolumeSpecName: "cni-path") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.594607 kubelet[1369]: I1002 19:59:32.594434 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.595336 kubelet[1369]: I1002 19:59:32.595268 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.595617 kubelet[1369]: I1002 19:59:32.595576 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.595851 kubelet[1369]: I1002 19:59:32.595814 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.596054 kubelet[1369]: I1002 19:59:32.596018 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.596199 kubelet[1369]: I1002 19:59:32.595309 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.596337 kubelet[1369]: I1002 19:59:32.595376 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:32.602853 systemd[1]: var-lib-kubelet-pods-749d446f\x2da980\x2d4e1d\x2dbfed\x2df215397bd061-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6qmwp.mount: Deactivated successfully. Oct 2 19:59:32.605044 kubelet[1369]: I1002 19:59:32.604991 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749d446f-a980-4e1d-bfed-f215397bd061-kube-api-access-6qmwp" (OuterVolumeSpecName: "kube-api-access-6qmwp") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "kube-api-access-6qmwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:32.608843 kubelet[1369]: I1002 19:59:32.608754 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/749d446f-a980-4e1d-bfed-f215397bd061-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:59:32.614292 systemd[1]: var-lib-kubelet-pods-749d446f\x2da980\x2d4e1d\x2dbfed\x2df215397bd061-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:59:32.621626 systemd[1]: var-lib-kubelet-pods-749d446f\x2da980\x2d4e1d\x2dbfed\x2df215397bd061-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:59:32.622793 kubelet[1369]: I1002 19:59:32.622736 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749d446f-a980-4e1d-bfed-f215397bd061-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:32.624027 kubelet[1369]: I1002 19:59:32.623962 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749d446f-a980-4e1d-bfed-f215397bd061-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "749d446f-a980-4e1d-bfed-f215397bd061" (UID: "749d446f-a980-4e1d-bfed-f215397bd061"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:59:32.694587 kubelet[1369]: I1002 19:59:32.694514 1369 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/749d446f-a980-4e1d-bfed-f215397bd061-cilium-config-path\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694587 kubelet[1369]: I1002 19:59:32.694576 1369 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/749d446f-a980-4e1d-bfed-f215397bd061-clustermesh-secrets\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694612 1369 reconciler.go:399] "Volume detached for volume \"kube-api-access-6qmwp\" (UniqueName: \"kubernetes.io/projected/749d446f-a980-4e1d-bfed-f215397bd061-kube-api-access-6qmwp\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694645 1369 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cilium-run\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694672 1369 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-lib-modules\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694699 1369 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-host-proc-sys-net\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694727 1369 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-hostproc\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694753 1369 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cni-path\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694779 1369 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-etc-cni-netd\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694806 1369 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-cilium-cgroup\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.694823 kubelet[1369]: I1002 19:59:32.694831 1369 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-bpf-maps\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.695656 kubelet[1369]: I1002 19:59:32.694860 1369 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-xtables-lock\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.695656 kubelet[1369]: I1002 19:59:32.694886 1369 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/749d446f-a980-4e1d-bfed-f215397bd061-hubble-tls\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:32.695656 kubelet[1369]: I1002 19:59:32.694913 1369 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/749d446f-a980-4e1d-bfed-f215397bd061-host-proc-sys-kernel\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:33.090729 kubelet[1369]: I1002 19:59:33.090540 1369 scope.go:115] "RemoveContainer" containerID="389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a" Oct 2 19:59:33.099065 env[1043]: time="2023-10-02T19:59:33.099000707Z" level=info msg="RemoveContainer for \"389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a\"" Oct 2 19:59:33.101190 systemd[1]: Removed slice kubepods-burstable-pod749d446f_a980_4e1d_bfed_f215397bd061.slice. Oct 2 19:59:33.108131 env[1043]: time="2023-10-02T19:59:33.108061010Z" level=info msg="RemoveContainer for \"389c7bf590afceeaaf56647ab1adf701c9f3ef11a50c673177bbdfc234be756a\" returns successfully" Oct 2 19:59:33.171479 kubelet[1369]: I1002 19:59:33.171397 1369 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:59:33.171910 kubelet[1369]: E1002 19:59:33.171882 1369 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.172127 kubelet[1369]: E1002 19:59:33.172102 1369 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.172346 kubelet[1369]: E1002 19:59:33.172322 1369 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.172607 kubelet[1369]: E1002 19:59:33.172582 1369 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.172884 kubelet[1369]: I1002 19:59:33.172819 1369 memory_manager.go:345] "RemoveStaleState removing state" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.173088 kubelet[1369]: I1002 19:59:33.173064 1369 memory_manager.go:345] "RemoveStaleState removing state" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.173297 kubelet[1369]: I1002 19:59:33.173273 1369 memory_manager.go:345] "RemoveStaleState removing state" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.173526 kubelet[1369]: I1002 19:59:33.173502 1369 memory_manager.go:345] "RemoveStaleState removing state" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.173741 kubelet[1369]: I1002 19:59:33.173717 1369 memory_manager.go:345] "RemoveStaleState removing state" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.173944 kubelet[1369]: I1002 19:59:33.173920 1369 memory_manager.go:345] "RemoveStaleState removing state" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.174204 kubelet[1369]: E1002 19:59:33.174148 1369 cpu_manager.go:394] "RemoveStaleState: removing container" 
podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.174376 kubelet[1369]: E1002 19:59:33.174353 1369 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="749d446f-a980-4e1d-bfed-f215397bd061" containerName="mount-cgroup" Oct 2 19:59:33.186663 systemd[1]: Created slice kubepods-burstable-pod70c2778e_a54b_4d95_b803_3c8a667e57c3.slice. Oct 2 19:59:33.298724 kubelet[1369]: I1002 19:59:33.298679 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-run\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.299369 kubelet[1369]: I1002 19:59:33.299187 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-cgroup\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.299961 kubelet[1369]: I1002 19:59:33.299930 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-host-proc-sys-kernel\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.300361 kubelet[1369]: I1002 19:59:33.300311 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-bpf-maps\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.300789 kubelet[1369]: I1002 19:59:33.300706 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cni-path\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.301287 kubelet[1369]: I1002 19:59:33.301231 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-etc-cni-netd\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.301721 kubelet[1369]: I1002 19:59:33.301695 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c2778e-a54b-4d95-b803-3c8a667e57c3-clustermesh-secrets\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.302134 kubelet[1369]: I1002 19:59:33.302063 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-hubble-tls\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.302498 kubelet[1369]: I1002 19:59:33.302472 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-hostproc\") pod 
\"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.302827 kubelet[1369]: I1002 19:59:33.302802 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-xtables-lock\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.303164 kubelet[1369]: I1002 19:59:33.303139 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-host-proc-sys-net\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.303542 kubelet[1369]: I1002 19:59:33.303516 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dgp4\" (UniqueName: \"kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.303948 kubelet[1369]: I1002 19:59:33.303855 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-lib-modules\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.304259 kubelet[1369]: I1002 19:59:33.304234 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-config-path\") pod \"cilium-dx425\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " pod="kube-system/cilium-dx425" Oct 2 19:59:33.364362 kubelet[1369]: E1002 19:59:33.364326 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:33.393678 env[1043]: time="2023-10-02T19:59:33.393081582Z" level=info msg="StopPodSandbox for \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\"" Oct 2 19:59:33.393678 env[1043]: time="2023-10-02T19:59:33.393322506Z" level=info msg="TearDown network for sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" successfully" Oct 2 19:59:33.393678 env[1043]: time="2023-10-02T19:59:33.393490423Z" level=info msg="StopPodSandbox for \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" returns successfully" Oct 2 19:59:33.396178 kubelet[1369]: I1002 19:59:33.396144 1369 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=749d446f-a980-4e1d-bfed-f215397bd061 path="/var/lib/kubelet/pods/749d446f-a980-4e1d-bfed-f215397bd061/volumes" Oct 2 19:59:33.423345 kubelet[1369]: E1002 19:59:33.423280 1369 projected.go:196] Error preparing data for projected volume kube-api-access-8dgp4 for pod kube-system/cilium-dx425: failed to fetch token: serviceaccounts "cilium" not found Oct 2 19:59:33.423635 kubelet[1369]: E1002 19:59:33.423455 1369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4 podName:70c2778e-a54b-4d95-b803-3c8a667e57c3 nodeName:}" failed. 
No retries permitted until 2023-10-02 19:59:33.923374811 +0000 UTC m=+243.641566158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8dgp4" (UniqueName: "kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4") pod "cilium-dx425" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3") : failed to fetch token: serviceaccounts "cilium" not found Oct 2 19:59:34.014589 kubelet[1369]: E1002 19:59:34.014533 1369 projected.go:196] Error preparing data for projected volume kube-api-access-8dgp4 for pod kube-system/cilium-dx425: failed to fetch token: serviceaccounts "cilium" not found Oct 2 19:59:34.015079 kubelet[1369]: E1002 19:59:34.015045 1369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4 podName:70c2778e-a54b-4d95-b803-3c8a667e57c3 nodeName:}" failed. No retries permitted until 2023-10-02 19:59:35.014994202 +0000 UTC m=+244.733185550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dgp4" (UniqueName: "kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4") pod "cilium-dx425" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3") : failed to fetch token: serviceaccounts "cilium" not found Oct 2 19:59:34.365646 kubelet[1369]: E1002 19:59:34.365607 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:35.024828 kubelet[1369]: E1002 19:59:35.024746 1369 projected.go:196] Error preparing data for projected volume kube-api-access-8dgp4 for pod kube-system/cilium-dx425: failed to fetch token: serviceaccounts "cilium" not found Oct 2 19:59:35.025087 kubelet[1369]: E1002 19:59:35.024866 1369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4 podName:70c2778e-a54b-4d95-b803-3c8a667e57c3 nodeName:}" failed. No retries permitted until 2023-10-02 19:59:37.024831333 +0000 UTC m=+246.743022680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dgp4" (UniqueName: "kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4") pod "cilium-dx425" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3") : failed to fetch token: serviceaccounts "cilium" not found Oct 2 19:59:35.366564 kubelet[1369]: E1002 19:59:35.366475 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:36.297293 kubelet[1369]: E1002 19:59:36.297248 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:36.366749 kubelet[1369]: E1002 19:59:36.366620 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:37.040297 kubelet[1369]: E1002 19:59:37.040245 1369 projected.go:196] Error preparing data for projected volume kube-api-access-8dgp4 for pod kube-system/cilium-dx425: failed to fetch token: serviceaccounts "cilium" not found Oct 2 19:59:37.041337 kubelet[1369]: E1002 19:59:37.041307 1369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4 podName:70c2778e-a54b-4d95-b803-3c8a667e57c3 nodeName:}" failed. 
No retries permitted until 2023-10-02 19:59:41.040729994 +0000 UTC m=+250.758921341 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dgp4" (UniqueName: "kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4") pod "cilium-dx425" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3") : failed to fetch token: serviceaccounts "cilium" not found Oct 2 19:59:37.367126 kubelet[1369]: E1002 19:59:37.367044 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:38.367985 kubelet[1369]: E1002 19:59:38.367931 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:38.926899 kubelet[1369]: I1002 19:59:38.926843 1369 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:59:38.940200 systemd[1]: Created slice kubepods-besteffort-podefd1f3b2_c632_4c1d_b5f1_bf0291649db3.slice. Oct 2 19:59:39.052803 kubelet[1369]: I1002 19:59:39.052732 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hf4\" (UniqueName: \"kubernetes.io/projected/efd1f3b2-c632-4c1d-b5f1-bf0291649db3-kube-api-access-x8hf4\") pod \"cilium-operator-69b677f97c-hch2g\" (UID: \"efd1f3b2-c632-4c1d-b5f1-bf0291649db3\") " pod="kube-system/cilium-operator-69b677f97c-hch2g" Oct 2 19:59:39.054295 kubelet[1369]: I1002 19:59:39.054205 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efd1f3b2-c632-4c1d-b5f1-bf0291649db3-cilium-config-path\") pod \"cilium-operator-69b677f97c-hch2g\" (UID: \"efd1f3b2-c632-4c1d-b5f1-bf0291649db3\") " pod="kube-system/cilium-operator-69b677f97c-hch2g" Oct 2 19:59:39.249470 env[1043]: time="2023-10-02T19:59:39.247967097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-hch2g,Uid:efd1f3b2-c632-4c1d-b5f1-bf0291649db3,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:39.282890 env[1043]: time="2023-10-02T19:59:39.282736281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:39.282890 env[1043]: time="2023-10-02T19:59:39.282821632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:39.283352 env[1043]: time="2023-10-02T19:59:39.282853652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:39.283995 env[1043]: time="2023-10-02T19:59:39.283756954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2 pid=2003 runtime=io.containerd.runc.v2 Oct 2 19:59:39.324969 systemd[1]: run-containerd-runc-k8s.io-040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2-runc.JvezP1.mount: Deactivated successfully. Oct 2 19:59:39.333113 systemd[1]: Started cri-containerd-040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2.scope. 
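A purely illustrative sketch (not part of the journal) of the doubling retry pattern visible in the MountVolume.SetUp retries above, which are rescheduled at 500ms, 1s, 2s, then 4s; the attempt count and cap used below are assumptions for illustration, not values taken from this log.

def retry_delays(initial_ms=500, factor=2, cap_ms=120_000, attempts=6):
    # Yield successive retry delays in milliseconds, doubling up to an assumed cap.
    delay = initial_ms
    for _ in range(attempts):
        yield min(delay, cap_ms)
        delay = min(delay * factor, cap_ms)

print(list(retry_delays()))  # [500, 1000, 2000, 4000, 8000, 16000]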
Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.355595 kernel: audit: type=1400 audit(1696276779.349:662): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.362744 kernel: audit: type=1400 audit(1696276779.349:663): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.362779 kernel: audit: type=1400 audit(1696276779.349:664): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.366339 kernel: audit: type=1400 audit(1696276779.349:665): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.366382 kernel: audit: type=1400 audit(1696276779.349:666): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.370255 kubelet[1369]: E1002 19:59:39.370173 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.374230 kernel: audit: type=1400 audit(1696276779.349:667): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.374288 kernel: audit: type=1400 audit(1696276779.349:668): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:59:39.381318 kernel: audit: type=1400 audit(1696276779.349:669): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.381394 kernel: audit: type=1400 audit(1696276779.349:670): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.350000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.350000 audit: BPF prog-id=75 op=LOAD Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00011fc48 a2=10 a3=1c items=0 ppid=2003 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:39.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306330646336623338343264656136633438323838386638373664 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00011f6b0 a2=3c a3=c items=0 ppid=2003 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:39.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306330646336623338343264656136633438323838386638373664 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: 
AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.395428 kernel: audit: type=1400 audit(1696276779.350:671): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.354000 audit: BPF prog-id=76 op=LOAD Oct 2 19:59:39.354000 audit[2014]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00011f9d8 a2=78 a3=c0003087d0 items=0 ppid=2003 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:39.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306330646336623338343264656136633438323838386638373664 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.361000 audit: BPF prog-id=77 op=LOAD Oct 2 19:59:39.361000 audit[2014]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00011f770 a2=78 a3=c000308818 items=0 ppid=2003 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:39.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306330646336623338343264656136633438323838386638373664 Oct 2 19:59:39.365000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:59:39.365000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { perfmon } for pid=2014 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit[2014]: AVC avc: denied { bpf } for pid=2014 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:39.365000 audit: BPF prog-id=78 op=LOAD Oct 2 19:59:39.365000 audit[2014]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=16 a0=5 a1=c00011fc30 a2=78 a3=c000308c28 items=0 ppid=2003 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:39.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034306330646336623338343264656136633438323838386638373664 Oct 2 19:59:39.416167 env[1043]: time="2023-10-02T19:59:39.416118963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-hch2g,Uid:efd1f3b2-c632-4c1d-b5f1-bf0291649db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2\"" Oct 2 19:59:39.418025 env[1043]: time="2023-10-02T19:59:39.417975762Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:59:40.370396 kubelet[1369]: E1002 19:59:40.370310 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:40.880816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867486514.mount: Deactivated successfully. Oct 2 19:59:41.298477 kubelet[1369]: E1002 19:59:41.298380 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:41.302658 env[1043]: time="2023-10-02T19:59:41.302589742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dx425,Uid:70c2778e-a54b-4d95-b803-3c8a667e57c3,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:41.350960 env[1043]: time="2023-10-02T19:59:41.350844829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:41.351065 env[1043]: time="2023-10-02T19:59:41.350994361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:41.351122 env[1043]: time="2023-10-02T19:59:41.351071667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:41.351499 env[1043]: time="2023-10-02T19:59:41.351394184Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a pid=2046 runtime=io.containerd.runc.v2 Oct 2 19:59:41.370584 kubelet[1369]: E1002 19:59:41.370541 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:41.400798 systemd[1]: Started cri-containerd-bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a.scope. 
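The audit records above can be read without external tools: arch=c000003e marks x86_64, syscall=321 is bpf(2), and capability=38 / capability=39 in the capability2 class are CAP_PERFMON and CAP_BPF, while each PROCTITLE value is the process's argv joined by NUL bytes and hex-encoded. A minimal Python sketch of that decoding follows; the constants are standard kernel values, and the sample hex is a truncated prefix of one PROCTITLE record above, so the decoded argv is likewise truncated.

# Decode the recurring numeric fields in the audit records above.
CAPABILITIES = {38: "CAP_PERFMON", 39: "CAP_BPF"}      # class capability2
SYSCALLS_X86_64 = {321: "bpf"}                          # arch=c000003e => x86_64

def decode_proctitle(hex_value: str) -> str:
    """auditd stores argv NUL-separated and hex-encoded in PROCTITLE."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode(errors="replace")

sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
print(decode_proctitle(sample))                 # -> runc --root /run/containerd/runc/k8s.io
print(CAPABILITIES[39], SYSCALLS_X86_64[321])   # -> CAP_BPF bpf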
Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.414000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.414000 audit: BPF prog-id=79 op=LOAD Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2046 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:41.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363631366465666364383538313065303431316538363739323832 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2046 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:41.415000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363631366465666364383538313065303431316538363739323832 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit: BPF prog-id=80 op=LOAD Oct 2 19:59:41.415000 audit[2056]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002ec850 items=0 ppid=2046 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:41.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363631366465666364383538313065303431316538363739323832 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: 
denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit: BPF prog-id=81 op=LOAD Oct 2 19:59:41.415000 audit[2056]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0002ec898 items=0 ppid=2046 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:41.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363631366465666364383538313065303431316538363739323832 Oct 2 19:59:41.415000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:59:41.415000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: 
denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { perfmon } for pid=2056 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit[2056]: AVC avc: denied { bpf } for pid=2056 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:41.415000 audit: BPF prog-id=82 op=LOAD Oct 2 19:59:41.415000 audit[2056]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0002ecca8 items=0 ppid=2046 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:41.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262363631366465666364383538313065303431316538363739323832 Oct 2 19:59:41.430616 env[1043]: time="2023-10-02T19:59:41.430562792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dx425,Uid:70c2778e-a54b-4d95-b803-3c8a667e57c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\"" Oct 2 19:59:41.433372 env[1043]: time="2023-10-02T19:59:41.433344925Z" level=info msg="CreateContainer within sandbox \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:59:41.464639 env[1043]: time="2023-10-02T19:59:41.464590722Z" level=info msg="CreateContainer within sandbox \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a\"" Oct 2 19:59:41.465502 env[1043]: time="2023-10-02T19:59:41.465467984Z" level=info msg="StartContainer for \"490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a\"" Oct 2 19:59:41.494209 systemd[1]: Started cri-containerd-490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a.scope. Oct 2 19:59:41.510117 systemd[1]: cri-containerd-490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a.scope: Deactivated successfully. 
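In the SYSCALL records above, the first syscall argument (a0) selects the bpf(2) command, so the repeated a0=f, a0=0 and a0=5 calls from runc correspond to BPF_OBJ_GET_INFO_BY_FD, BPF_MAP_CREATE and BPF_PROG_LOAD in the kernel's uapi enum bpf_cmd. A small lookup table, complementing the decoder sketched earlier, makes those records self-describing; the values are standard kernel constants.

# Map the a0 values seen in the runc SYSCALL records above to bpf(2) commands.
BPF_CMDS = {0x0: "BPF_MAP_CREATE", 0x5: "BPF_PROG_LOAD", 0xF: "BPF_OBJ_GET_INFO_BY_FD"}

def bpf_command(a0_field: str) -> str:
    """a0 is logged in hex, e.g. 'a0=5' -> BPF_PROG_LOAD."""
    return BPF_CMDS.get(int(a0_field, 16), f"bpf cmd 0x{int(a0_field, 16):x}")

for a0 in ("f", "0", "5"):
    print(f"a0={a0} ->", bpf_command(a0))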
Oct 2 19:59:41.642459 env[1043]: time="2023-10-02T19:59:41.642332113Z" level=info msg="shim disconnected" id=490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a Oct 2 19:59:41.642818 env[1043]: time="2023-10-02T19:59:41.642467087Z" level=warning msg="cleaning up after shim disconnected" id=490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a namespace=k8s.io Oct 2 19:59:41.642818 env[1043]: time="2023-10-02T19:59:41.642497565Z" level=info msg="cleaning up dead shim" Oct 2 19:59:41.670877 env[1043]: time="2023-10-02T19:59:41.670758747Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2104 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:41.671359 env[1043]: time="2023-10-02T19:59:41.671247137Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Oct 2 19:59:41.673326 env[1043]: time="2023-10-02T19:59:41.673237448Z" level=error msg="Failed to pipe stderr of container \"490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a\"" error="reading from a closed fifo" Oct 2 19:59:41.673510 env[1043]: time="2023-10-02T19:59:41.673360310Z" level=error msg="Failed to pipe stdout of container \"490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a\"" error="reading from a closed fifo" Oct 2 19:59:41.677880 env[1043]: time="2023-10-02T19:59:41.677788924Z" level=error msg="StartContainer for \"490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:41.678317 kubelet[1369]: E1002 19:59:41.678266 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a" Oct 2 19:59:41.679025 kubelet[1369]: E1002 19:59:41.678704 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:41.679025 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:41.679025 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:59:41.679025 kubelet[1369]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8dgp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dx425_kube-system(70c2778e-a54b-4d95-b803-3c8a667e57c3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:41.679550 kubelet[1369]: E1002 19:59:41.678843 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dx425" podUID=70c2778e-a54b-4d95-b803-3c8a667e57c3 Oct 2 19:59:42.120667 env[1043]: time="2023-10-02T19:59:42.120600550Z" level=info msg="StopPodSandbox for \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\"" Oct 2 19:59:42.120798 env[1043]: time="2023-10-02T19:59:42.120712792Z" level=info msg="Container to stop \"490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:59:42.126917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a-shm.mount: Deactivated successfully. Oct 2 19:59:42.141177 systemd[1]: cri-containerd-bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a.scope: Deactivated successfully. Oct 2 19:59:42.140000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:59:42.145000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:59:42.197728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a-rootfs.mount: Deactivated successfully. 
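The StartContainer failure above ("write /proc/self/attr/keycreate: invalid argument") occurs while the OCI runtime applies the SELinux options requested in the init container spec (type spc_t): before exec'ing the container process it writes that label into /proc/self/attr/keycreate so newly created kernel keyrings get the container's context, and on this host the kernel rejects the write with EINVAL, so the mount-cgroup task never starts and the sandbox is torn down. The following minimal Python sketch performs just that write, which can help reproduce the error outside of runc; the label string is taken from the spec above, and the exact errno depends on the host's SELinux state and policy.

# Reproduce the failing step in isolation: writing the requested SELinux label
# for newly created kernel keyrings. Run as root on a comparable host; expect
# EINVAL here to match the StartContainer error, or EACCES/ENOENT elsewhere.
import errno

label = "system_u:system_r:spc_t:s0"   # from the SELinuxOptions in the spec above
try:
    with open("/proc/self/attr/keycreate", "w") as f:
        f.write(label)
except OSError as e:
    print("keycreate write failed:", errno.errorcode.get(e.errno, e.errno), e.strerror)
else:
    print("keycreate label set to", label)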
Oct 2 19:59:42.321676 env[1043]: time="2023-10-02T19:59:42.321589645Z" level=info msg="shim disconnected" id=bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a Oct 2 19:59:42.322557 env[1043]: time="2023-10-02T19:59:42.322481676Z" level=warning msg="cleaning up after shim disconnected" id=bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a namespace=k8s.io Oct 2 19:59:42.322734 env[1043]: time="2023-10-02T19:59:42.322697933Z" level=info msg="cleaning up dead shim" Oct 2 19:59:42.323343 env[1043]: time="2023-10-02T19:59:42.323293556Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:42.328908 env[1043]: time="2023-10-02T19:59:42.328818205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:42.332310 env[1043]: time="2023-10-02T19:59:42.332230535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:42.335240 env[1043]: time="2023-10-02T19:59:42.335173129Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 19:59:42.340756 env[1043]: time="2023-10-02T19:59:42.340665668Z" level=info msg="CreateContainer within sandbox \"040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:59:42.365888 env[1043]: time="2023-10-02T19:59:42.365800617Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2135 runtime=io.containerd.runc.v2\n" Oct 2 19:59:42.366705 env[1043]: time="2023-10-02T19:59:42.366648815Z" level=info msg="TearDown network for sandbox \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\" successfully" Oct 2 19:59:42.366959 env[1043]: time="2023-10-02T19:59:42.366879129Z" level=info msg="StopPodSandbox for \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\" returns successfully" Oct 2 19:59:42.374607 kubelet[1369]: E1002 19:59:42.371731 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:42.389949 env[1043]: time="2023-10-02T19:59:42.389816307Z" level=info msg="CreateContainer within sandbox \"040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\"" Oct 2 19:59:42.391267 env[1043]: time="2023-10-02T19:59:42.391207548Z" level=info msg="StartContainer for \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\"" Oct 2 19:59:42.425446 systemd[1]: Started cri-containerd-219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952.scope. 
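The audit stream also tracks the BPF programs systemd and runc attach per scope (most likely the cgroup device filters): each "BPF prog-id=N op=LOAD" at scope start is matched by an "op=UNLOAD" when the scope is deactivated, as with prog-id 79 and 82 for the bb6616... sandbox above. A small sketch for pairing those events when auditing a longer capture; the sample string is illustrative only.

# Pair "BPF prog-id=N op=LOAD/UNLOAD" audit events to spot programs that were
# loaded for a scope but never unloaded. Feed it raw audit/journal text.
import re
from collections import Counter

def outstanding_progs(audit_text: str) -> Counter:
    live = Counter()
    for prog_id, op in re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", audit_text):
        live[prog_id] += 1 if op == "LOAD" else -1
    return +live   # unary + drops entries that are back to zero

sample = "audit: BPF prog-id=79 op=LOAD ... audit: BPF prog-id=79 op=UNLOAD"
print(outstanding_progs(sample))   # Counter() -> every LOAD had a matching UNLOAD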
Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.450000 audit: BPF prog-id=83 op=LOAD Oct 2 19:59:42.451000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.451000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=2003 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:42.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231396234303662333831373338653965323265333466636531323532 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=2003 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:42.452000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231396234303662333831373338653965323265333466636531323532 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.452000 audit: BPF prog-id=84 op=LOAD Oct 2 19:59:42.452000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c00018f940 items=0 ppid=2003 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:42.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231396234303662333831373338653965323265333466636531323532 Oct 2 19:59:42.453000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit[2155]: AVC avc: 
denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.453000 audit: BPF prog-id=85 op=LOAD Oct 2 19:59:42.453000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 a3=c00018f988 items=0 ppid=2003 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:42.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231396234303662333831373338653965323265333466636531323532 Oct 2 19:59:42.454000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:59:42.454000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: 
denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:42.454000 audit: BPF prog-id=86 op=LOAD Oct 2 19:59:42.454000 audit[2155]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c00018fd98 items=0 ppid=2003 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:42.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231396234303662333831373338653965323265333466636531323532 Oct 2 19:59:42.474583 env[1043]: time="2023-10-02T19:59:42.474536816Z" level=info msg="StartContainer for \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\" returns successfully" Oct 2 19:59:42.487371 kubelet[1369]: I1002 19:59:42.486834 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-config-path\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487371 kubelet[1369]: I1002 19:59:42.486884 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-run\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487371 kubelet[1369]: I1002 19:59:42.486907 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cni-path\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487371 kubelet[1369]: I1002 19:59:42.486929 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-xtables-lock\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487371 kubelet[1369]: I1002 19:59:42.486952 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-host-proc-sys-net\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487371 kubelet[1369]: I1002 19:59:42.486973 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-lib-modules\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487641 kubelet[1369]: I1002 19:59:42.486995 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-cgroup\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487641 kubelet[1369]: I1002 19:59:42.487017 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-hostproc\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487641 kubelet[1369]: I1002 19:59:42.487042 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-hubble-tls\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.487641 kubelet[1369]: W1002 19:59:42.487048 1369 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/70c2778e-a54b-4d95-b803-3c8a667e57c3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:59:42.490748 kubelet[1369]: I1002 19:59:42.489051 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:59:42.490748 kubelet[1369]: I1002 19:59:42.487072 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dgp4\" (UniqueName: \"kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.490748 kubelet[1369]: I1002 19:59:42.489142 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-host-proc-sys-kernel\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.490748 kubelet[1369]: I1002 19:59:42.489189 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-bpf-maps\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.490748 kubelet[1369]: I1002 19:59:42.489213 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-etc-cni-netd\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.490748 kubelet[1369]: I1002 19:59:42.489258 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c2778e-a54b-4d95-b803-3c8a667e57c3-clustermesh-secrets\") pod \"70c2778e-a54b-4d95-b803-3c8a667e57c3\" (UID: \"70c2778e-a54b-4d95-b803-3c8a667e57c3\") " Oct 2 19:59:42.490963 kubelet[1369]: I1002 19:59:42.489297 1369 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-config-path\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.490963 kubelet[1369]: I1002 19:59:42.489548 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.490963 kubelet[1369]: I1002 19:59:42.489600 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.490963 kubelet[1369]: I1002 19:59:42.489622 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.490963 kubelet[1369]: I1002 19:59:42.489643 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.491108 kubelet[1369]: I1002 19:59:42.489660 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.491108 kubelet[1369]: I1002 19:59:42.489703 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cni-path" (OuterVolumeSpecName: "cni-path") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.491108 kubelet[1369]: I1002 19:59:42.489742 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.491108 kubelet[1369]: I1002 19:59:42.489781 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.491108 kubelet[1369]: I1002 19:59:42.489797 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.491247 kubelet[1369]: I1002 19:59:42.489813 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-hostproc" (OuterVolumeSpecName: "hostproc") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:42.493576 kubelet[1369]: I1002 19:59:42.493535 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4" (OuterVolumeSpecName: "kube-api-access-8dgp4") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "kube-api-access-8dgp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:42.496439 kubelet[1369]: I1002 19:59:42.495449 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c2778e-a54b-4d95-b803-3c8a667e57c3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:59:42.497686 kubelet[1369]: I1002 19:59:42.497655 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70c2778e-a54b-4d95-b803-3c8a667e57c3" (UID: "70c2778e-a54b-4d95-b803-3c8a667e57c3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:42.497000 audit[2166]: AVC avc: denied { map_create } for pid=2166 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c597,c749 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c597,c749 tclass=bpf permissive=0 Oct 2 19:59:42.497000 audit[2166]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c00052d7d0 a2=48 a3=c00052d7c0 items=0 ppid=2003 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c597,c749 key=(null) Oct 2 19:59:42.497000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:59:42.590172 kubelet[1369]: I1002 19:59:42.590093 1369 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c2778e-a54b-4d95-b803-3c8a667e57c3-clustermesh-secrets\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590172 kubelet[1369]: I1002 19:59:42.590162 1369 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-hubble-tls\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590562 kubelet[1369]: I1002 19:59:42.590200 1369 reconciler.go:399] "Volume detached for volume \"kube-api-access-8dgp4\" (UniqueName: \"kubernetes.io/projected/70c2778e-a54b-4d95-b803-3c8a667e57c3-kube-api-access-8dgp4\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590562 kubelet[1369]: I1002 19:59:42.590233 1369 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-host-proc-sys-kernel\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590562 kubelet[1369]: I1002 19:59:42.590261 1369 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-bpf-maps\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590562 kubelet[1369]: I1002 19:59:42.590315 1369 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-etc-cni-netd\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590562 kubelet[1369]: I1002 19:59:42.590346 1369 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-run\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590562 kubelet[1369]: I1002 19:59:42.590373 1369 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cni-path\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590562 kubelet[1369]: I1002 19:59:42.590421 1369 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-xtables-lock\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.590562 kubelet[1369]: I1002 19:59:42.590450 1369 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-hostproc\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.591060 kubelet[1369]: I1002 19:59:42.590477 1369 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-host-proc-sys-net\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.591060 kubelet[1369]: I1002 19:59:42.590503 1369 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-lib-modules\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.591060 kubelet[1369]: I1002 19:59:42.590530 1369 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c2778e-a54b-4d95-b803-3c8a667e57c3-cilium-cgroup\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 19:59:42.872714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount812535471.mount: Deactivated successfully. Oct 2 19:59:42.873255 systemd[1]: var-lib-kubelet-pods-70c2778e\x2da54b\x2d4d95\x2db803\x2d3c8a667e57c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8dgp4.mount: Deactivated successfully. Oct 2 19:59:42.873680 systemd[1]: var-lib-kubelet-pods-70c2778e\x2da54b\x2d4d95\x2db803\x2d3c8a667e57c3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:59:42.874028 systemd[1]: var-lib-kubelet-pods-70c2778e\x2da54b\x2d4d95\x2db803\x2d3c8a667e57c3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:59:43.128459 kubelet[1369]: I1002 19:59:43.128241 1369 scope.go:115] "RemoveContainer" containerID="490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a" Oct 2 19:59:43.133767 env[1043]: time="2023-10-02T19:59:43.133685404Z" level=info msg="RemoveContainer for \"490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a\"" Oct 2 19:59:43.138130 systemd[1]: Removed slice kubepods-burstable-pod70c2778e_a54b_4d95_b803_3c8a667e57c3.slice. 
Oct 2 19:59:43.148955 env[1043]: time="2023-10-02T19:59:43.148844855Z" level=info msg="RemoveContainer for \"490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a\" returns successfully" Oct 2 19:59:43.191775 kubelet[1369]: I1002 19:59:43.191717 1369 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:59:43.192054 kubelet[1369]: E1002 19:59:43.191801 1369 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="70c2778e-a54b-4d95-b803-3c8a667e57c3" containerName="mount-cgroup" Oct 2 19:59:43.192054 kubelet[1369]: I1002 19:59:43.191868 1369 memory_manager.go:345] "RemoveStaleState removing state" podUID="70c2778e-a54b-4d95-b803-3c8a667e57c3" containerName="mount-cgroup" Oct 2 19:59:43.204561 systemd[1]: Created slice kubepods-burstable-pod6cd12143_3890_47c3_bd14_1e561c5d32bd.slice. Oct 2 19:59:43.294308 kubelet[1369]: I1002 19:59:43.294239 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cni-path\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.294769 kubelet[1369]: I1002 19:59:43.294736 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-xtables-lock\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.295077 kubelet[1369]: I1002 19:59:43.295037 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-bpf-maps\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.295390 kubelet[1369]: I1002 19:59:43.295351 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-etc-cni-netd\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.295758 kubelet[1369]: I1002 19:59:43.295727 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cd12143-3890-47c3-bd14-1e561c5d32bd-clustermesh-secrets\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.296050 kubelet[1369]: I1002 19:59:43.296022 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cd12143-3890-47c3-bd14-1e561c5d32bd-hubble-tls\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.296289 kubelet[1369]: I1002 19:59:43.296263 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49pm5\" (UniqueName: \"kubernetes.io/projected/6cd12143-3890-47c3-bd14-1e561c5d32bd-kube-api-access-49pm5\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.297036 kubelet[1369]: I1002 19:59:43.296827 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-hostproc\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.297580 kubelet[1369]: I1002 19:59:43.297551 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-cgroup\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.297814 kubelet[1369]: I1002 19:59:43.297788 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-config-path\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.298046 kubelet[1369]: I1002 19:59:43.298004 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-host-proc-sys-net\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.298284 kubelet[1369]: I1002 19:59:43.298258 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-run\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.298592 kubelet[1369]: I1002 19:59:43.298564 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-ipsec-secrets\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.298829 kubelet[1369]: I1002 19:59:43.298788 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-host-proc-sys-kernel\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.299073 kubelet[1369]: I1002 19:59:43.299047 1369 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-lib-modules\") pod \"cilium-kdc57\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " pod="kube-system/cilium-kdc57" Oct 2 19:59:43.372293 kubelet[1369]: E1002 19:59:43.372239 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:43.395786 kubelet[1369]: I1002 19:59:43.395621 1369 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=70c2778e-a54b-4d95-b803-3c8a667e57c3 path="/var/lib/kubelet/pods/70c2778e-a54b-4d95-b803-3c8a667e57c3/volumes" Oct 2 19:59:43.519485 env[1043]: time="2023-10-02T19:59:43.519223989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdc57,Uid:6cd12143-3890-47c3-bd14-1e561c5d32bd,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:43.548388 env[1043]: time="2023-10-02T19:59:43.547933319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:43.548388 env[1043]: time="2023-10-02T19:59:43.548038898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:43.548388 env[1043]: time="2023-10-02T19:59:43.548075388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:43.548883 env[1043]: time="2023-10-02T19:59:43.548499346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d pid=2200 runtime=io.containerd.runc.v2 Oct 2 19:59:43.578890 systemd[1]: Started cri-containerd-e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d.scope. Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.602000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.603000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.603000 audit: BPF prog-id=87 op=LOAD Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2200 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:59:43.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539303938633438643762636539646363653161666662373636656331 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2200 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:43.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539303938633438643762636539646363653161666662373636656331 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit: BPF prog-id=88 op=LOAD Oct 2 19:59:43.604000 audit[2210]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000292ae0 items=0 ppid=2200 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:43.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539303938633438643762636539646363653161666662373636656331 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.604000 audit: BPF prog-id=89 op=LOAD Oct 2 19:59:43.604000 audit[2210]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000292b28 items=0 ppid=2200 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:43.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539303938633438643762636539646363653161666662373636656331 Oct 2 19:59:43.605000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:59:43.605000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:59:43.605000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { perfmon } for pid=2210 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit[2210]: AVC avc: denied { bpf } for pid=2210 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:43.605000 audit: BPF prog-id=90 op=LOAD Oct 2 19:59:43.605000 audit[2210]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000292f38 items=0 ppid=2200 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:43.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539303938633438643762636539646363653161666662373636656331 Oct 2 19:59:43.632337 env[1043]: time="2023-10-02T19:59:43.632220750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdc57,Uid:6cd12143-3890-47c3-bd14-1e561c5d32bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\"" Oct 2 19:59:43.637457 env[1043]: time="2023-10-02T19:59:43.637424444Z" level=info msg="CreateContainer within sandbox \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:59:43.662218 env[1043]: time="2023-10-02T19:59:43.662018981Z" level=info msg="CreateContainer within sandbox \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\"" Oct 2 19:59:43.664154 env[1043]: time="2023-10-02T19:59:43.664105722Z" level=info msg="StartContainer for \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\"" Oct 2 19:59:43.692136 systemd[1]: Started 
cri-containerd-24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92.scope. Oct 2 19:59:43.707567 systemd[1]: cri-containerd-24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92.scope: Deactivated successfully. Oct 2 19:59:43.731142 env[1043]: time="2023-10-02T19:59:43.731096114Z" level=info msg="shim disconnected" id=24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92 Oct 2 19:59:43.731359 env[1043]: time="2023-10-02T19:59:43.731338861Z" level=warning msg="cleaning up after shim disconnected" id=24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92 namespace=k8s.io Oct 2 19:59:43.731464 env[1043]: time="2023-10-02T19:59:43.731446394Z" level=info msg="cleaning up dead shim" Oct 2 19:59:43.745749 env[1043]: time="2023-10-02T19:59:43.745721379Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2259 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:43.746046 env[1043]: time="2023-10-02T19:59:43.746004842Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:59:43.746577 env[1043]: time="2023-10-02T19:59:43.746497832Z" level=error msg="Failed to pipe stdout of container \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\"" error="reading from a closed fifo" Oct 2 19:59:43.748542 env[1043]: time="2023-10-02T19:59:43.748482210Z" level=error msg="Failed to pipe stderr of container \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\"" error="reading from a closed fifo" Oct 2 19:59:43.752186 env[1043]: time="2023-10-02T19:59:43.752152515Z" level=error msg="StartContainer for \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:43.752640 kubelet[1369]: E1002 19:59:43.752458 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92" Oct 2 19:59:43.752640 kubelet[1369]: E1002 19:59:43.752572 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:43.752640 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:43.752640 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:59:43.752819 kubelet[1369]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-49pm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:43.752914 kubelet[1369]: E1002 19:59:43.752617 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 19:59:44.151184 env[1043]: time="2023-10-02T19:59:44.151074325Z" level=info msg="CreateContainer within sandbox \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:59:44.180588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287391317.mount: Deactivated successfully. Oct 2 19:59:44.189798 env[1043]: time="2023-10-02T19:59:44.189726763Z" level=info msg="CreateContainer within sandbox \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\"" Oct 2 19:59:44.191337 env[1043]: time="2023-10-02T19:59:44.191206971Z" level=info msg="StartContainer for \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\"" Oct 2 19:59:44.239768 systemd[1]: Started cri-containerd-59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b.scope. Oct 2 19:59:44.252911 systemd[1]: cri-containerd-59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b.scope: Deactivated successfully. 
Oct 2 19:59:44.264837 env[1043]: time="2023-10-02T19:59:44.264792789Z" level=info msg="shim disconnected" id=59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b Oct 2 19:59:44.265037 env[1043]: time="2023-10-02T19:59:44.265018434Z" level=warning msg="cleaning up after shim disconnected" id=59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b namespace=k8s.io Oct 2 19:59:44.265114 env[1043]: time="2023-10-02T19:59:44.265099407Z" level=info msg="cleaning up dead shim" Oct 2 19:59:44.272915 env[1043]: time="2023-10-02T19:59:44.272872530Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2298 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:44.273281 env[1043]: time="2023-10-02T19:59:44.273231166Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:59:44.276491 env[1043]: time="2023-10-02T19:59:44.276458876Z" level=error msg="Failed to pipe stdout of container \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\"" error="reading from a closed fifo" Oct 2 19:59:44.276606 env[1043]: time="2023-10-02T19:59:44.276580345Z" level=error msg="Failed to pipe stderr of container \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\"" error="reading from a closed fifo" Oct 2 19:59:44.280468 env[1043]: time="2023-10-02T19:59:44.280396785Z" level=error msg="StartContainer for \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:44.281277 kubelet[1369]: E1002 19:59:44.280698 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b" Oct 2 19:59:44.281277 kubelet[1369]: E1002 19:59:44.280814 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:44.281277 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:44.281277 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 19:59:44.281476 kubelet[1369]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-49pm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:44.281547 kubelet[1369]: E1002 19:59:44.280873 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 19:59:44.373176 kubelet[1369]: E1002 19:59:44.373077 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:44.751733 kubelet[1369]: W1002 19:59:44.750565 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c2778e_a54b_4d95_b803_3c8a667e57c3.slice/cri-containerd-490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a.scope WatchSource:0}: container "490a36b30a0289ab33dc5c713db0d7ece647200f67642a35b4f31bce39973e6a" in namespace "k8s.io": not found Oct 2 19:59:44.870844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b-rootfs.mount: Deactivated successfully. 
Oct 2 19:59:45.147206 kubelet[1369]: I1002 19:59:45.147155 1369 scope.go:115] "RemoveContainer" containerID="24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92" Oct 2 19:59:45.147890 kubelet[1369]: I1002 19:59:45.147846 1369 scope.go:115] "RemoveContainer" containerID="24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92" Oct 2 19:59:45.151582 env[1043]: time="2023-10-02T19:59:45.150757714Z" level=info msg="RemoveContainer for \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\"" Oct 2 19:59:45.151582 env[1043]: time="2023-10-02T19:59:45.150880505Z" level=info msg="RemoveContainer for \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\"" Oct 2 19:59:45.151582 env[1043]: time="2023-10-02T19:59:45.151059592Z" level=error msg="RemoveContainer for \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\" failed" error="failed to set removing state for container \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\": container is already in removing state" Oct 2 19:59:45.152626 kubelet[1369]: E1002 19:59:45.151321 1369 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\": container is already in removing state" containerID="24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92" Oct 2 19:59:45.152626 kubelet[1369]: E1002 19:59:45.151377 1369 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92": container is already in removing state; Skipping pod "cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd)" Oct 2 19:59:45.152626 kubelet[1369]: E1002 19:59:45.152036 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd)\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 19:59:45.161581 env[1043]: time="2023-10-02T19:59:45.161511521Z" level=info msg="RemoveContainer for \"24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92\" returns successfully" Oct 2 19:59:45.373714 kubelet[1369]: E1002 19:59:45.373627 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:46.155502 kubelet[1369]: E1002 19:59:46.155385 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd)\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 19:59:46.300315 kubelet[1369]: E1002 19:59:46.300265 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:46.374617 kubelet[1369]: E1002 19:59:46.374566 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.375680 kubelet[1369]: E1002 19:59:47.375621 1369 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.878223 kubelet[1369]: W1002 19:59:47.878116 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cd12143_3890_47c3_bd14_1e561c5d32bd.slice/cri-containerd-24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92.scope WatchSource:0}: container "24c87d6d1c4e74b70e987c0a7c39c78039a43ca06ab324ec408bc4c255544b92" in namespace "k8s.io": not found Oct 2 19:59:48.377067 kubelet[1369]: E1002 19:59:48.376970 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:49.378628 kubelet[1369]: E1002 19:59:49.378582 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:50.379959 kubelet[1369]: E1002 19:59:50.379892 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:50.990755 kubelet[1369]: W1002 19:59:50.990661 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cd12143_3890_47c3_bd14_1e561c5d32bd.slice/cri-containerd-59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b.scope WatchSource:0}: task 59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b not found: not found Oct 2 19:59:51.111549 kubelet[1369]: E1002 19:59:51.111483 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:51.302361 kubelet[1369]: E1002 19:59:51.302224 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:51.380480 kubelet[1369]: E1002 19:59:51.380432 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:52.382144 kubelet[1369]: E1002 19:59:52.382060 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:53.383669 kubelet[1369]: E1002 19:59:53.383615 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:54.385148 kubelet[1369]: E1002 19:59:54.385054 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:55.385349 kubelet[1369]: E1002 19:59:55.385278 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:56.304318 kubelet[1369]: E1002 19:59:56.304269 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:56.387094 kubelet[1369]: E1002 19:59:56.387007 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:57.387784 kubelet[1369]: E1002 19:59:57.387684 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:58.388677 kubelet[1369]: E1002 19:59:58.388541 1369 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:59.388968 kubelet[1369]: E1002 19:59:59.388878 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:00.389774 kubelet[1369]: E1002 20:00:00.389695 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:00.396814 env[1043]: time="2023-10-02T20:00:00.396721177Z" level=info msg="CreateContainer within sandbox \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:00:00.419864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640669527.mount: Deactivated successfully. Oct 2 20:00:00.435081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566991476.mount: Deactivated successfully. Oct 2 20:00:00.440257 env[1043]: time="2023-10-02T20:00:00.440181278Z" level=info msg="CreateContainer within sandbox \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\"" Oct 2 20:00:00.442395 env[1043]: time="2023-10-02T20:00:00.441648880Z" level=info msg="StartContainer for \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\"" Oct 2 20:00:00.489375 systemd[1]: Started cri-containerd-d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e.scope. Oct 2 20:00:00.507055 systemd[1]: cri-containerd-d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e.scope: Deactivated successfully. Oct 2 20:00:00.531028 env[1043]: time="2023-10-02T20:00:00.530974960Z" level=info msg="shim disconnected" id=d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e Oct 2 20:00:00.531247 env[1043]: time="2023-10-02T20:00:00.531226824Z" level=warning msg="cleaning up after shim disconnected" id=d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e namespace=k8s.io Oct 2 20:00:00.531340 env[1043]: time="2023-10-02T20:00:00.531324828Z" level=info msg="cleaning up dead shim" Oct 2 20:00:00.545801 env[1043]: time="2023-10-02T20:00:00.545712876Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2334 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:00:00.546250 env[1043]: time="2023-10-02T20:00:00.546144989Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:00:00.546577 env[1043]: time="2023-10-02T20:00:00.546502472Z" level=error msg="Failed to pipe stdout of container \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\"" error="reading from a closed fifo" Oct 2 20:00:00.547294 env[1043]: time="2023-10-02T20:00:00.547201678Z" level=error msg="Failed to pipe stderr of container \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\"" error="reading from a closed fifo" Oct 2 20:00:00.552778 env[1043]: time="2023-10-02T20:00:00.552703703Z" level=error msg="StartContainer for \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\" failed" error="failed to create containerd task: failed 
to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:00:00.553439 kubelet[1369]: E1002 20:00:00.553008 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e" Oct 2 20:00:00.553439 kubelet[1369]: E1002 20:00:00.553101 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:00:00.553439 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:00:00.553439 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 20:00:00.553599 kubelet[1369]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-49pm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:00:00.553669 kubelet[1369]: E1002 20:00:00.553139 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 20:00:01.201274 kubelet[1369]: I1002 20:00:01.201192 1369 scope.go:115] "RemoveContainer" containerID="59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b" Oct 2 20:00:01.202000 kubelet[1369]: I1002 
20:00:01.201945 1369 scope.go:115] "RemoveContainer" containerID="59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b" Oct 2 20:00:01.205136 env[1043]: time="2023-10-02T20:00:01.205035405Z" level=info msg="RemoveContainer for \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\"" Oct 2 20:00:01.206570 env[1043]: time="2023-10-02T20:00:01.206391557Z" level=info msg="RemoveContainer for \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\"" Oct 2 20:00:01.206931 env[1043]: time="2023-10-02T20:00:01.206732028Z" level=error msg="RemoveContainer for \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\" failed" error="failed to set removing state for container \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\": container is already in removing state" Oct 2 20:00:01.207595 kubelet[1369]: E1002 20:00:01.207168 1369 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\": container is already in removing state" containerID="59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b" Oct 2 20:00:01.207595 kubelet[1369]: E1002 20:00:01.207251 1369 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b": container is already in removing state; Skipping pod "cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd)" Oct 2 20:00:01.212485 kubelet[1369]: E1002 20:00:01.207988 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd)\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 20:00:01.216033 env[1043]: time="2023-10-02T20:00:01.215845658Z" level=info msg="RemoveContainer for \"59553a1bad638497bc51b7883f428f123ea62f4f3791f11e5032475088bd375b\" returns successfully" Oct 2 20:00:01.305763 kubelet[1369]: E1002 20:00:01.305719 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:01.390104 kubelet[1369]: E1002 20:00:01.390020 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:01.411260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e-rootfs.mount: Deactivated successfully. 
Oct 2 20:00:02.390361 kubelet[1369]: E1002 20:00:02.390290 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:03.392075 kubelet[1369]: E1002 20:00:03.392003 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:03.639516 kubelet[1369]: W1002 20:00:03.639362 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cd12143_3890_47c3_bd14_1e561c5d32bd.slice/cri-containerd-d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e.scope WatchSource:0}: task d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e not found: not found Oct 2 20:00:04.394549 kubelet[1369]: E1002 20:00:04.394444 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:05.394867 kubelet[1369]: E1002 20:00:05.394753 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:06.307830 kubelet[1369]: E1002 20:00:06.307755 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:06.395456 kubelet[1369]: E1002 20:00:06.395220 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:07.395742 kubelet[1369]: E1002 20:00:07.395663 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:08.396546 kubelet[1369]: E1002 20:00:08.396478 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:09.397635 kubelet[1369]: E1002 20:00:09.397557 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:10.398520 kubelet[1369]: E1002 20:00:10.398472 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:11.111046 kubelet[1369]: E1002 20:00:11.110916 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:11.310765 kubelet[1369]: E1002 20:00:11.310618 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:11.400178 kubelet[1369]: E1002 20:00:11.399569 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:12.401584 kubelet[1369]: E1002 20:00:12.401508 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:13.402692 kubelet[1369]: E1002 20:00:13.402622 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:14.392110 kubelet[1369]: E1002 20:00:14.392018 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup 
pod=cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd)\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 20:00:14.403427 kubelet[1369]: E1002 20:00:14.403305 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:15.407040 kubelet[1369]: E1002 20:00:15.406972 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:16.312239 kubelet[1369]: E1002 20:00:16.312091 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:16.408054 kubelet[1369]: E1002 20:00:16.407897 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:17.409222 kubelet[1369]: E1002 20:00:17.409150 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:18.409874 kubelet[1369]: E1002 20:00:18.409775 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:19.410335 kubelet[1369]: E1002 20:00:19.410231 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:20.412490 kubelet[1369]: E1002 20:00:20.411823 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:21.313538 kubelet[1369]: E1002 20:00:21.313489 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:21.414625 kubelet[1369]: E1002 20:00:21.414562 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:22.415531 kubelet[1369]: E1002 20:00:22.415441 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:23.416577 kubelet[1369]: E1002 20:00:23.416527 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:24.417615 kubelet[1369]: E1002 20:00:24.417549 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:25.418846 kubelet[1369]: E1002 20:00:25.418801 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:26.315114 kubelet[1369]: E1002 20:00:26.315039 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:26.420611 kubelet[1369]: E1002 20:00:26.420453 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:27.421397 kubelet[1369]: E1002 20:00:27.421307 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:28.395925 env[1043]: time="2023-10-02T20:00:28.395559295Z" level=info msg="CreateContainer within sandbox 
\"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:00:28.426005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70797278.mount: Deactivated successfully. Oct 2 20:00:28.427505 kubelet[1369]: E1002 20:00:28.427456 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:28.436721 env[1043]: time="2023-10-02T20:00:28.436638387Z" level=info msg="CreateContainer within sandbox \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43\"" Oct 2 20:00:28.438629 env[1043]: time="2023-10-02T20:00:28.438554017Z" level=info msg="StartContainer for \"3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43\"" Oct 2 20:00:28.490844 systemd[1]: Started cri-containerd-3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43.scope. Oct 2 20:00:28.517296 systemd[1]: cri-containerd-3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43.scope: Deactivated successfully. Oct 2 20:00:28.533483 env[1043]: time="2023-10-02T20:00:28.533437790Z" level=info msg="shim disconnected" id=3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43 Oct 2 20:00:28.533707 env[1043]: time="2023-10-02T20:00:28.533689363Z" level=warning msg="cleaning up after shim disconnected" id=3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43 namespace=k8s.io Oct 2 20:00:28.533776 env[1043]: time="2023-10-02T20:00:28.533763121Z" level=info msg="cleaning up dead shim" Oct 2 20:00:28.542055 env[1043]: time="2023-10-02T20:00:28.542009131Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2376 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:00:28.542309 env[1043]: time="2023-10-02T20:00:28.542257968Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed" Oct 2 20:00:28.543683 env[1043]: time="2023-10-02T20:00:28.543642892Z" level=error msg="Failed to pipe stdout of container \"3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43\"" error="reading from a closed fifo" Oct 2 20:00:28.543807 env[1043]: time="2023-10-02T20:00:28.543773246Z" level=error msg="Failed to pipe stderr of container \"3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43\"" error="reading from a closed fifo" Oct 2 20:00:28.547796 env[1043]: time="2023-10-02T20:00:28.547750132Z" level=error msg="StartContainer for \"3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:00:28.548444 kubelet[1369]: E1002 20:00:28.548048 1369 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43" Oct 2 20:00:28.548444 kubelet[1369]: E1002 20:00:28.548156 1369 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:00:28.548444 kubelet[1369]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:00:28.548444 kubelet[1369]: rm /hostbin/cilium-mount Oct 2 20:00:28.548613 kubelet[1369]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-49pm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:00:28.548687 kubelet[1369]: E1002 20:00:28.548199 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 20:00:29.302817 kubelet[1369]: I1002 20:00:29.301775 1369 scope.go:115] "RemoveContainer" containerID="d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e" Oct 2 20:00:29.302817 kubelet[1369]: I1002 20:00:29.302500 1369 scope.go:115] "RemoveContainer" containerID="d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e" Oct 2 20:00:29.306259 env[1043]: time="2023-10-02T20:00:29.306171263Z" level=info msg="RemoveContainer for \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\"" Oct 2 20:00:29.310876 env[1043]: time="2023-10-02T20:00:29.310763273Z" level=info msg="RemoveContainer for 
\"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\"" Oct 2 20:00:29.311143 env[1043]: time="2023-10-02T20:00:29.311029073Z" level=error msg="RemoveContainer for \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\" failed" error="failed to set removing state for container \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\": container is already in removing state" Oct 2 20:00:29.311630 kubelet[1369]: E1002 20:00:29.311398 1369 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\": container is already in removing state" containerID="d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e" Oct 2 20:00:29.311630 kubelet[1369]: E1002 20:00:29.311542 1369 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e": container is already in removing state; Skipping pod "cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd)" Oct 2 20:00:29.313154 kubelet[1369]: E1002 20:00:29.313081 1369 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-kdc57_kube-system(6cd12143-3890-47c3-bd14-1e561c5d32bd)\"" pod="kube-system/cilium-kdc57" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd Oct 2 20:00:29.317510 env[1043]: time="2023-10-02T20:00:29.317397452Z" level=info msg="RemoveContainer for \"d339c95cef8eabf1b6d264ea6ff3ae4b547a0768c07595b10bb37cbf1431142e\" returns successfully" Oct 2 20:00:29.414965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43-rootfs.mount: Deactivated successfully. 
Oct 2 20:00:29.428079 kubelet[1369]: E1002 20:00:29.427999 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:30.428758 kubelet[1369]: E1002 20:00:30.428655 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:31.111870 kubelet[1369]: E1002 20:00:31.111791 1369 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:31.144774 env[1043]: time="2023-10-02T20:00:31.144671225Z" level=info msg="StopPodSandbox for \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\"" Oct 2 20:00:31.145519 env[1043]: time="2023-10-02T20:00:31.144839051Z" level=info msg="TearDown network for sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" successfully" Oct 2 20:00:31.145519 env[1043]: time="2023-10-02T20:00:31.144913691Z" level=info msg="StopPodSandbox for \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" returns successfully" Oct 2 20:00:31.145870 env[1043]: time="2023-10-02T20:00:31.145792252Z" level=info msg="RemovePodSandbox for \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\"" Oct 2 20:00:31.145976 env[1043]: time="2023-10-02T20:00:31.145863196Z" level=info msg="Forcibly stopping sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\"" Oct 2 20:00:31.146078 env[1043]: time="2023-10-02T20:00:31.145999331Z" level=info msg="TearDown network for sandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" successfully" Oct 2 20:00:31.152234 env[1043]: time="2023-10-02T20:00:31.152043771Z" level=info msg="RemovePodSandbox \"c3fc887f75113cfe5e5df18542acca22959181316d29352df052b453c0056574\" returns successfully" Oct 2 20:00:31.153064 env[1043]: time="2023-10-02T20:00:31.152983958Z" level=info msg="StopPodSandbox for \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\"" Oct 2 20:00:31.153560 env[1043]: time="2023-10-02T20:00:31.153453300Z" level=info msg="TearDown network for sandbox \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\" successfully" Oct 2 20:00:31.153770 env[1043]: time="2023-10-02T20:00:31.153722786Z" level=info msg="StopPodSandbox for \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\" returns successfully" Oct 2 20:00:31.154647 env[1043]: time="2023-10-02T20:00:31.154580669Z" level=info msg="RemovePodSandbox for \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\"" Oct 2 20:00:31.154824 env[1043]: time="2023-10-02T20:00:31.154651512Z" level=info msg="Forcibly stopping sandbox \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\"" Oct 2 20:00:31.154979 env[1043]: time="2023-10-02T20:00:31.154811353Z" level=info msg="TearDown network for sandbox \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\" successfully" Oct 2 20:00:31.159831 env[1043]: time="2023-10-02T20:00:31.159655666Z" level=info msg="RemovePodSandbox \"bb6616defcd85810e0411e86792824fa736d1d652b07591cdc3ebd8869529e5a\" returns successfully" Oct 2 20:00:31.316073 kubelet[1369]: E1002 20:00:31.316029 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:31.429151 kubelet[1369]: E1002 20:00:31.428981 1369 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:31.642183 kubelet[1369]: W1002 20:00:31.642088 1369 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cd12143_3890_47c3_bd14_1e561c5d32bd.slice/cri-containerd-3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43.scope WatchSource:0}: task 3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43 not found: not found Oct 2 20:00:32.430282 kubelet[1369]: E1002 20:00:32.430224 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:33.431780 kubelet[1369]: E1002 20:00:33.431601 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:34.432365 kubelet[1369]: E1002 20:00:34.432306 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:35.434207 kubelet[1369]: E1002 20:00:35.434137 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:36.318752 kubelet[1369]: E1002 20:00:36.318709 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:36.435567 kubelet[1369]: E1002 20:00:36.435392 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:37.436459 kubelet[1369]: E1002 20:00:37.436290 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:38.437002 kubelet[1369]: E1002 20:00:38.436932 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:39.075788 env[1043]: time="2023-10-02T20:00:39.075684608Z" level=info msg="StopPodSandbox for \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\"" Oct 2 20:00:39.080553 env[1043]: time="2023-10-02T20:00:39.075835823Z" level=info msg="Container to stop \"3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:00:39.079573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d-shm.mount: Deactivated successfully. Oct 2 20:00:39.095677 systemd[1]: cri-containerd-e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d.scope: Deactivated successfully. 
Oct 2 20:00:39.101298 kernel: kauditd_printk_skb: 223 callbacks suppressed Oct 2 20:00:39.101605 kernel: audit: type=1334 audit(1696276839.095:737): prog-id=87 op=UNLOAD Oct 2 20:00:39.095000 audit: BPF prog-id=87 op=UNLOAD Oct 2 20:00:39.103000 audit: BPF prog-id=90 op=UNLOAD Oct 2 20:00:39.112222 kernel: audit: type=1334 audit(1696276839.103:738): prog-id=90 op=UNLOAD Oct 2 20:00:39.123862 env[1043]: time="2023-10-02T20:00:39.123777982Z" level=info msg="StopContainer for \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\" with timeout 30 (s)" Oct 2 20:00:39.125184 env[1043]: time="2023-10-02T20:00:39.125105215Z" level=info msg="Stop container \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\" with signal terminated" Oct 2 20:00:39.151000 audit: BPF prog-id=83 op=UNLOAD Oct 2 20:00:39.150947 systemd[1]: cri-containerd-219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952.scope: Deactivated successfully. Oct 2 20:00:39.158051 kernel: audit: type=1334 audit(1696276839.151:739): prog-id=83 op=UNLOAD Oct 2 20:00:39.158695 kernel: audit: type=1334 audit(1696276839.155:740): prog-id=86 op=UNLOAD Oct 2 20:00:39.155000 audit: BPF prog-id=86 op=UNLOAD Oct 2 20:00:39.188890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d-rootfs.mount: Deactivated successfully. Oct 2 20:00:39.199443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952-rootfs.mount: Deactivated successfully. Oct 2 20:00:39.201290 env[1043]: time="2023-10-02T20:00:39.201240777Z" level=info msg="shim disconnected" id=219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952 Oct 2 20:00:39.201375 env[1043]: time="2023-10-02T20:00:39.201292534Z" level=warning msg="cleaning up after shim disconnected" id=219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952 namespace=k8s.io Oct 2 20:00:39.201375 env[1043]: time="2023-10-02T20:00:39.201307111Z" level=info msg="cleaning up dead shim" Oct 2 20:00:39.201734 env[1043]: time="2023-10-02T20:00:39.201680332Z" level=info msg="shim disconnected" id=e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d Oct 2 20:00:39.201783 env[1043]: time="2023-10-02T20:00:39.201734074Z" level=warning msg="cleaning up after shim disconnected" id=e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d namespace=k8s.io Oct 2 20:00:39.201783 env[1043]: time="2023-10-02T20:00:39.201744965Z" level=info msg="cleaning up dead shim" Oct 2 20:00:39.215867 env[1043]: time="2023-10-02T20:00:39.215802718Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2432 runtime=io.containerd.runc.v2\n" Oct 2 20:00:39.217211 env[1043]: time="2023-10-02T20:00:39.217164726Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2433 runtime=io.containerd.runc.v2\n" Oct 2 20:00:39.218348 env[1043]: time="2023-10-02T20:00:39.218305900Z" level=info msg="TearDown network for sandbox \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" successfully" Oct 2 20:00:39.218348 env[1043]: time="2023-10-02T20:00:39.218338421Z" level=info msg="StopPodSandbox for \"e9098c48d7bce9dcce1affb766ec1171509e36cd8aa700f8831f2144fd9c121d\" returns successfully" Oct 2 20:00:39.220341 env[1043]: time="2023-10-02T20:00:39.219827680Z" level=info msg="StopContainer for 
\"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\" returns successfully" Oct 2 20:00:39.223275 env[1043]: time="2023-10-02T20:00:39.221084091Z" level=info msg="StopPodSandbox for \"040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2\"" Oct 2 20:00:39.223275 env[1043]: time="2023-10-02T20:00:39.221139936Z" level=info msg="Container to stop \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:00:39.222776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2-shm.mount: Deactivated successfully. Oct 2 20:00:39.231000 audit: BPF prog-id=75 op=UNLOAD Oct 2 20:00:39.231329 systemd[1]: cri-containerd-040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2.scope: Deactivated successfully. Oct 2 20:00:39.233422 kernel: audit: type=1334 audit(1696276839.231:741): prog-id=75 op=UNLOAD Oct 2 20:00:39.236000 audit: BPF prog-id=78 op=UNLOAD Oct 2 20:00:39.238431 kernel: audit: type=1334 audit(1696276839.236:742): prog-id=78 op=UNLOAD Oct 2 20:00:39.253978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2-rootfs.mount: Deactivated successfully. Oct 2 20:00:39.267884 env[1043]: time="2023-10-02T20:00:39.267809994Z" level=info msg="shim disconnected" id=040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2 Oct 2 20:00:39.268212 env[1043]: time="2023-10-02T20:00:39.268177175Z" level=warning msg="cleaning up after shim disconnected" id=040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2 namespace=k8s.io Oct 2 20:00:39.268334 env[1043]: time="2023-10-02T20:00:39.268307870Z" level=info msg="cleaning up dead shim" Oct 2 20:00:39.276824 env[1043]: time="2023-10-02T20:00:39.276773446Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2476 runtime=io.containerd.runc.v2\n" Oct 2 20:00:39.277459 env[1043]: time="2023-10-02T20:00:39.277393751Z" level=info msg="TearDown network for sandbox \"040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2\" successfully" Oct 2 20:00:39.277593 env[1043]: time="2023-10-02T20:00:39.277565474Z" level=info msg="StopPodSandbox for \"040c0dc6b3842dea6c482888f876de4115257b8481a7041d30bfecc19211bde2\" returns successfully" Oct 2 20:00:39.336489 kubelet[1369]: I1002 20:00:39.335208 1369 scope.go:115] "RemoveContainer" containerID="219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952" Oct 2 20:00:39.348054 env[1043]: time="2023-10-02T20:00:39.347959628Z" level=info msg="RemoveContainer for \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\"" Oct 2 20:00:39.350247 kubelet[1369]: I1002 20:00:39.350192 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-lib-modules\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.350524 kubelet[1369]: I1002 20:00:39.350503 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-xtables-lock\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.350716 kubelet[1369]: I1002 20:00:39.350696 
1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-bpf-maps\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.350918 kubelet[1369]: I1002 20:00:39.350889 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-run\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.351077 kubelet[1369]: I1002 20:00:39.351059 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-etc-cni-netd\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.351249 kubelet[1369]: I1002 20:00:39.351231 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cd12143-3890-47c3-bd14-1e561c5d32bd-hubble-tls\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.351461 kubelet[1369]: I1002 20:00:39.351442 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-cgroup\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.351680 kubelet[1369]: I1002 20:00:39.351661 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-config-path\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.351872 kubelet[1369]: I1002 20:00:39.351851 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-ipsec-secrets\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.352072 kubelet[1369]: I1002 20:00:39.352050 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cni-path\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.352248 kubelet[1369]: I1002 20:00:39.352230 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cd12143-3890-47c3-bd14-1e561c5d32bd-clustermesh-secrets\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.352474 kubelet[1369]: I1002 20:00:39.352449 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49pm5\" (UniqueName: \"kubernetes.io/projected/6cd12143-3890-47c3-bd14-1e561c5d32bd-kube-api-access-49pm5\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.352710 kubelet[1369]: I1002 20:00:39.352689 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-host-proc-sys-net\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.352875 kubelet[1369]: I1002 20:00:39.352857 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-hostproc\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.353072 kubelet[1369]: I1002 20:00:39.353053 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-host-proc-sys-kernel\") pod \"6cd12143-3890-47c3-bd14-1e561c5d32bd\" (UID: \"6cd12143-3890-47c3-bd14-1e561c5d32bd\") " Oct 2 20:00:39.353250 kubelet[1369]: I1002 20:00:39.353232 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8hf4\" (UniqueName: \"kubernetes.io/projected/efd1f3b2-c632-4c1d-b5f1-bf0291649db3-kube-api-access-x8hf4\") pod \"efd1f3b2-c632-4c1d-b5f1-bf0291649db3\" (UID: \"efd1f3b2-c632-4c1d-b5f1-bf0291649db3\") " Oct 2 20:00:39.353454 kubelet[1369]: I1002 20:00:39.353432 1369 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efd1f3b2-c632-4c1d-b5f1-bf0291649db3-cilium-config-path\") pod \"efd1f3b2-c632-4c1d-b5f1-bf0291649db3\" (UID: \"efd1f3b2-c632-4c1d-b5f1-bf0291649db3\") " Oct 2 20:00:39.355474 kubelet[1369]: W1002 20:00:39.355447 1369 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/efd1f3b2-c632-4c1d-b5f1-bf0291649db3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:00:39.357799 env[1043]: time="2023-10-02T20:00:39.357722391Z" level=info msg="RemoveContainer for \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\" returns successfully" Oct 2 20:00:39.359501 kubelet[1369]: I1002 20:00:39.359473 1369 scope.go:115] "RemoveContainer" containerID="219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952" Oct 2 20:00:39.359766 kubelet[1369]: I1002 20:00:39.359724 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.359926 kubelet[1369]: I1002 20:00:39.359899 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.360137 kubelet[1369]: I1002 20:00:39.360106 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.360303 kubelet[1369]: I1002 20:00:39.360274 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.374930 kubelet[1369]: I1002 20:00:39.361345 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cni-path" (OuterVolumeSpecName: "cni-path") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.374930 kubelet[1369]: I1002 20:00:39.366076 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd12143-3890-47c3-bd14-1e561c5d32bd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:00:39.374930 kubelet[1369]: I1002 20:00:39.366838 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd12143-3890-47c3-bd14-1e561c5d32bd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:39.374930 kubelet[1369]: I1002 20:00:39.366906 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.374930 kubelet[1369]: I1002 20:00:39.369858 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd12143-3890-47c3-bd14-1e561c5d32bd-kube-api-access-49pm5" (OuterVolumeSpecName: "kube-api-access-49pm5") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "kube-api-access-49pm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:39.375348 kubelet[1369]: I1002 20:00:39.369908 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.375348 kubelet[1369]: I1002 20:00:39.369938 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-hostproc" (OuterVolumeSpecName: "hostproc") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.375348 kubelet[1369]: I1002 20:00:39.369974 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.375348 kubelet[1369]: I1002 20:00:39.372535 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:00:39.375348 kubelet[1369]: W1002 20:00:39.373075 1369 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/6cd12143-3890-47c3-bd14-1e561c5d32bd/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:00:39.376863 env[1043]: time="2023-10-02T20:00:39.376533585Z" level=error msg="ContainerStatus for \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\": not found" Oct 2 20:00:39.377206 kubelet[1369]: E1002 20:00:39.377176 1369 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\": not found" containerID="219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952" Oct 2 20:00:39.377397 kubelet[1369]: I1002 20:00:39.377375 1369 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952} err="failed to get container status \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\": rpc error: code = NotFound desc = an error occurred when try to find container \"219b406b381738e9e22e34fce12528b839df03bbc3b4ea8abbe84ca6ffa9b952\": not found" Oct 2 20:00:39.377698 kubelet[1369]: I1002 20:00:39.377672 1369 scope.go:115] "RemoveContainer" containerID="3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43" Oct 2 20:00:39.380253 kubelet[1369]: I1002 20:00:39.380202 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:39.380397 kubelet[1369]: I1002 20:00:39.373758 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd1f3b2-c632-4c1d-b5f1-bf0291649db3-kube-api-access-x8hf4" (OuterVolumeSpecName: "kube-api-access-x8hf4") pod "efd1f3b2-c632-4c1d-b5f1-bf0291649db3" (UID: "efd1f3b2-c632-4c1d-b5f1-bf0291649db3"). InnerVolumeSpecName "kube-api-access-x8hf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:39.380615 kubelet[1369]: I1002 20:00:39.374804 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd1f3b2-c632-4c1d-b5f1-bf0291649db3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "efd1f3b2-c632-4c1d-b5f1-bf0291649db3" (UID: "efd1f3b2-c632-4c1d-b5f1-bf0291649db3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:00:39.381238 env[1043]: time="2023-10-02T20:00:39.381175327Z" level=info msg="RemoveContainer for \"3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43\"" Oct 2 20:00:39.385909 env[1043]: time="2023-10-02T20:00:39.385822489Z" level=info msg="RemoveContainer for \"3a3df35da3ccf17ee5d23098bf24fdf9d4f6c4d91e5b26d99b8e089e0d4ebc43\" returns successfully" Oct 2 20:00:39.386239 kubelet[1369]: I1002 20:00:39.386116 1369 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6cd12143-3890-47c3-bd14-1e561c5d32bd" (UID: "6cd12143-3890-47c3-bd14-1e561c5d32bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:00:39.398981 systemd[1]: Removed slice kubepods-burstable-pod6cd12143_3890_47c3_bd14_1e561c5d32bd.slice. Oct 2 20:00:39.403756 systemd[1]: Removed slice kubepods-besteffort-podefd1f3b2_c632_4c1d_b5f1_bf0291649db3.slice. Oct 2 20:00:39.437820 kubelet[1369]: E1002 20:00:39.437765 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:39.454209 kubelet[1369]: I1002 20:00:39.454122 1369 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6cd12143-3890-47c3-bd14-1e561c5d32bd-clustermesh-secrets\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.454647 kubelet[1369]: I1002 20:00:39.454592 1369 reconciler.go:399] "Volume detached for volume \"kube-api-access-49pm5\" (UniqueName: \"kubernetes.io/projected/6cd12143-3890-47c3-bd14-1e561c5d32bd-kube-api-access-49pm5\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.454888 kubelet[1369]: I1002 20:00:39.454835 1369 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-host-proc-sys-net\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.455086 kubelet[1369]: I1002 20:00:39.455063 1369 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-hostproc\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.455305 kubelet[1369]: I1002 20:00:39.455282 1369 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-host-proc-sys-kernel\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.455665 kubelet[1369]: I1002 20:00:39.455613 1369 reconciler.go:399] "Volume detached for volume \"kube-api-access-x8hf4\" (UniqueName: \"kubernetes.io/projected/efd1f3b2-c632-4c1d-b5f1-bf0291649db3-kube-api-access-x8hf4\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.455886 kubelet[1369]: I1002 20:00:39.455840 1369 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/efd1f3b2-c632-4c1d-b5f1-bf0291649db3-cilium-config-path\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.456108 kubelet[1369]: I1002 20:00:39.456084 1369 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-lib-modules\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.456322 kubelet[1369]: I1002 20:00:39.456300 1369 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-xtables-lock\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.456580 kubelet[1369]: I1002 20:00:39.456556 1369 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-bpf-maps\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.456802 kubelet[1369]: I1002 20:00:39.456778 1369 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-run\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.457018 kubelet[1369]: I1002 20:00:39.456995 1369 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-etc-cni-netd\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.457244 kubelet[1369]: I1002 20:00:39.457221 1369 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6cd12143-3890-47c3-bd14-1e561c5d32bd-hubble-tls\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.457506 kubelet[1369]: I1002 20:00:39.457483 1369 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-cgroup\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.457738 kubelet[1369]: I1002 20:00:39.457714 1369 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-config-path\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.457966 kubelet[1369]: I1002 20:00:39.457942 1369 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6cd12143-3890-47c3-bd14-1e561c5d32bd-cilium-ipsec-secrets\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:39.458193 kubelet[1369]: I1002 20:00:39.458170 1369 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6cd12143-3890-47c3-bd14-1e561c5d32bd-cni-path\") on node \"172.24.4.32\" DevicePath \"\"" Oct 2 20:00:40.079118 systemd[1]: var-lib-kubelet-pods-6cd12143\x2d3890\x2d47c3\x2dbd14\x2d1e561c5d32bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d49pm5.mount: Deactivated successfully. Oct 2 20:00:40.079361 systemd[1]: var-lib-kubelet-pods-6cd12143\x2d3890\x2d47c3\x2dbd14\x2d1e561c5d32bd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:00:40.079556 systemd[1]: var-lib-kubelet-pods-6cd12143\x2d3890\x2d47c3\x2dbd14\x2d1e561c5d32bd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:00:40.079703 systemd[1]: var-lib-kubelet-pods-6cd12143\x2d3890\x2d47c3\x2dbd14\x2d1e561c5d32bd-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Oct 2 20:00:40.079849 systemd[1]: var-lib-kubelet-pods-efd1f3b2\x2dc632\x2d4c1d\x2db5f1\x2dbf0291649db3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx8hf4.mount: Deactivated successfully. Oct 2 20:00:40.438332 kubelet[1369]: E1002 20:00:40.438291 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:41.320813 kubelet[1369]: E1002 20:00:41.320731 1369 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:41.395999 kubelet[1369]: I1002 20:00:41.395894 1369 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6cd12143-3890-47c3-bd14-1e561c5d32bd path="/var/lib/kubelet/pods/6cd12143-3890-47c3-bd14-1e561c5d32bd/volumes" Oct 2 20:00:41.396948 kubelet[1369]: I1002 20:00:41.396910 1369 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=efd1f3b2-c632-4c1d-b5f1-bf0291649db3 path="/var/lib/kubelet/pods/efd1f3b2-c632-4c1d-b5f1-bf0291649db3/volumes" Oct 2 20:00:41.439441 kubelet[1369]: E1002 20:00:41.439314 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:42.439956 kubelet[1369]: E1002 20:00:42.439877 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:43.441390 kubelet[1369]: E1002 20:00:43.441328 1369 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"