Oct 2 19:32:37.826540 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:32:37.826563 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:32:37.826571 kernel: BIOS-provided physical RAM map: Oct 2 19:32:37.826577 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:32:37.826582 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:32:37.826587 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:32:37.826594 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Oct 2 19:32:37.826600 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Oct 2 19:32:37.826606 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 19:32:37.826612 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:32:37.826617 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 2 19:32:37.826623 kernel: NX (Execute Disable) protection: active Oct 2 19:32:37.826628 kernel: SMBIOS 2.8 present. Oct 2 19:32:37.826634 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 2 19:32:37.826642 kernel: Hypervisor detected: KVM Oct 2 19:32:37.826648 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:32:37.826654 kernel: kvm-clock: cpu 0, msr 29f8a001, primary cpu clock Oct 2 19:32:37.826660 kernel: kvm-clock: using sched offset of 4552123366 cycles Oct 2 19:32:37.826666 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:32:37.826672 kernel: tsc: Detected 2794.748 MHz processor Oct 2 19:32:37.826678 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:32:37.826685 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:32:37.826691 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Oct 2 19:32:37.826698 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:32:37.826704 kernel: Using GB pages for direct mapping Oct 2 19:32:37.826710 kernel: ACPI: Early table checksum verification disabled Oct 2 19:32:37.826716 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Oct 2 19:32:37.826721 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:32:37.826728 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:32:37.826734 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:32:37.826739 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 2 19:32:37.826745 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:32:37.826753 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:32:37.826759 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:32:37.826765 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Oct 2 19:32:37.826770 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Oct 2 19:32:37.826776 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 2 19:32:37.826782 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Oct 2 19:32:37.826788 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Oct 2 19:32:37.826794 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Oct 2 19:32:37.826803 kernel: No NUMA configuration found Oct 2 19:32:37.826809 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Oct 2 19:32:37.826816 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Oct 2 19:32:37.826822 kernel: Zone ranges: Oct 2 19:32:37.826829 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:32:37.826835 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Oct 2 19:32:37.826842 kernel: Normal empty Oct 2 19:32:37.826849 kernel: Movable zone start for each node Oct 2 19:32:37.826855 kernel: Early memory node ranges Oct 2 19:32:37.826861 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:32:37.826867 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Oct 2 19:32:37.826874 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Oct 2 19:32:37.826880 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:32:37.826886 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:32:37.826893 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Oct 2 19:32:37.826900 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 19:32:37.826906 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:32:37.826913 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 19:32:37.826919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 19:32:37.826926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:32:37.826932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:32:37.826938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:32:37.826945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:32:37.826951 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:32:37.826958 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:32:37.826964 kernel: TSC deadline timer available Oct 2 19:32:37.826971 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 2 19:32:37.826977 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 2 19:32:37.826983 kernel: kvm-guest: setup PV sched yield Oct 2 19:32:37.826990 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Oct 2 19:32:37.826996 kernel: Booting paravirtualized kernel on KVM Oct 2 19:32:37.827002 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:32:37.827009 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Oct 2 19:32:37.827016 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Oct 2 19:32:37.827023 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Oct 2 19:32:37.827029 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 2 19:32:37.827035 kernel: kvm-guest: setup async PF for cpu 0 Oct 2 19:32:37.827041 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Oct 2 19:32:37.827048 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:32:37.827054 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 
19:32:37.827060 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Oct 2 19:32:37.827067 kernel: Policy zone: DMA32 Oct 2 19:32:37.827074 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:32:37.827082 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:32:37.827088 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:32:37.827095 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:32:37.827101 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:32:37.827108 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 132728K reserved, 0K cma-reserved) Oct 2 19:32:37.827114 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:32:37.827121 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:32:37.827128 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:32:37.827135 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:32:37.827142 kernel: rcu: RCU event tracing is enabled. Oct 2 19:32:37.827148 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:32:37.827155 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:32:37.827161 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:32:37.827168 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:32:37.827174 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:32:37.827180 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 2 19:32:37.827188 kernel: random: crng init done Oct 2 19:32:37.827194 kernel: Console: colour VGA+ 80x25 Oct 2 19:32:37.827201 kernel: printk: console [ttyS0] enabled Oct 2 19:32:37.827207 kernel: ACPI: Core revision 20210730 Oct 2 19:32:37.827213 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 2 19:32:37.827220 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:32:37.827226 kernel: x2apic enabled Oct 2 19:32:37.827232 kernel: Switched APIC routing to physical x2apic. Oct 2 19:32:37.827239 kernel: kvm-guest: setup PV IPIs Oct 2 19:32:37.827245 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 19:32:37.827252 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 19:32:37.827259 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 2 19:32:37.827265 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 2 19:32:37.827272 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 2 19:32:37.827278 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 2 19:32:37.827284 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:32:37.827291 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:32:37.827297 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:32:37.827305 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:32:37.827318 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 2 19:32:37.827325 kernel: RETBleed: Mitigation: untrained return thunk Oct 2 19:32:37.827332 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 19:32:37.827339 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 19:32:37.827346 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:32:37.827353 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:32:37.827360 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:32:37.827368 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:32:37.827375 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 19:32:37.827383 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:32:37.827390 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:32:37.827396 kernel: LSM: Security Framework initializing Oct 2 19:32:37.827403 kernel: SELinux: Initializing. Oct 2 19:32:37.827409 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:32:37.827418 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:32:37.827425 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 2 19:32:37.827433 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 2 19:32:37.827439 kernel: ... version: 0 Oct 2 19:32:37.827446 kernel: ... bit width: 48 Oct 2 19:32:37.827453 kernel: ... generic registers: 6 Oct 2 19:32:37.827459 kernel: ... value mask: 0000ffffffffffff Oct 2 19:32:37.827467 kernel: ... max period: 00007fffffffffff Oct 2 19:32:37.827474 kernel: ... fixed-purpose events: 0 Oct 2 19:32:37.827489 kernel: ... event mask: 000000000000003f Oct 2 19:32:37.827496 kernel: signal: max sigframe size: 1776 Oct 2 19:32:37.827504 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:32:37.827602 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:32:37.827611 kernel: x86: Booting SMP configuration: Oct 2 19:32:37.827618 kernel: .... 
node #0, CPUs: #1 Oct 2 19:32:37.827625 kernel: kvm-clock: cpu 1, msr 29f8a041, secondary cpu clock Oct 2 19:32:37.827631 kernel: kvm-guest: setup async PF for cpu 1 Oct 2 19:32:37.827638 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Oct 2 19:32:37.827645 kernel: #2 Oct 2 19:32:37.827652 kernel: kvm-clock: cpu 2, msr 29f8a081, secondary cpu clock Oct 2 19:32:37.827658 kernel: kvm-guest: setup async PF for cpu 2 Oct 2 19:32:37.827667 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Oct 2 19:32:37.827673 kernel: #3 Oct 2 19:32:37.827680 kernel: kvm-clock: cpu 3, msr 29f8a0c1, secondary cpu clock Oct 2 19:32:37.827686 kernel: kvm-guest: setup async PF for cpu 3 Oct 2 19:32:37.827693 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Oct 2 19:32:37.827699 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:32:37.827706 kernel: smpboot: Max logical packages: 1 Oct 2 19:32:37.827713 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 2 19:32:37.827719 kernel: devtmpfs: initialized Oct 2 19:32:37.827727 kernel: x86/mm: Memory block size: 128MB Oct 2 19:32:37.827734 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:32:37.827741 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:32:37.827748 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:32:37.827754 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:32:37.827761 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:32:37.827768 kernel: audit: type=2000 audit(1696275157.000:1): state=initialized audit_enabled=0 res=1 Oct 2 19:32:37.827774 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:32:37.827781 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:32:37.827789 kernel: cpuidle: using governor menu Oct 2 19:32:37.827795 kernel: ACPI: bus type PCI registered Oct 2 19:32:37.827802 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:32:37.827809 kernel: dca service started, version 1.12.1 Oct 2 19:32:37.827815 kernel: PCI: Using configuration type 1 for base access Oct 2 19:32:37.827822 kernel: PCI: Using configuration type 1 for extended access Oct 2 19:32:37.827829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 19:32:37.827835 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:32:37.827842 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:32:37.827850 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:32:37.827857 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:32:37.827864 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:32:37.827870 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:32:37.827877 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:32:37.827883 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:32:37.827890 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:32:37.827897 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:32:37.827903 kernel: ACPI: Interpreter enabled Oct 2 19:32:37.827911 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 19:32:37.827918 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:32:37.827924 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:32:37.827931 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 19:32:37.827938 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:32:37.828054 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:32:37.828066 kernel: acpiphp: Slot [3] registered Oct 2 19:32:37.828073 kernel: acpiphp: Slot [4] registered Oct 2 19:32:37.828081 kernel: acpiphp: Slot [5] registered Oct 2 19:32:37.828087 kernel: acpiphp: Slot [6] registered Oct 2 19:32:37.828094 kernel: acpiphp: Slot [7] registered Oct 2 19:32:37.828101 kernel: acpiphp: Slot [8] registered Oct 2 19:32:37.828109 kernel: acpiphp: Slot [9] registered Oct 2 19:32:37.828118 kernel: acpiphp: Slot [10] registered Oct 2 19:32:37.828126 kernel: acpiphp: Slot [11] registered Oct 2 19:32:37.828134 kernel: acpiphp: Slot [12] registered Oct 2 19:32:37.828141 kernel: acpiphp: Slot [13] registered Oct 2 19:32:37.828147 kernel: acpiphp: Slot [14] registered Oct 2 19:32:37.828155 kernel: acpiphp: Slot [15] registered Oct 2 19:32:37.828162 kernel: acpiphp: Slot [16] registered Oct 2 19:32:37.828168 kernel: acpiphp: Slot [17] registered Oct 2 19:32:37.828177 kernel: acpiphp: Slot [18] registered Oct 2 19:32:37.828185 kernel: acpiphp: Slot [19] registered Oct 2 19:32:37.828192 kernel: acpiphp: Slot [20] registered Oct 2 19:32:37.828199 kernel: acpiphp: Slot [21] registered Oct 2 19:32:37.828205 kernel: acpiphp: Slot [22] registered Oct 2 19:32:37.828212 kernel: acpiphp: Slot [23] registered Oct 2 19:32:37.828220 kernel: acpiphp: Slot [24] registered Oct 2 19:32:37.828226 kernel: acpiphp: Slot [25] registered Oct 2 19:32:37.828233 kernel: acpiphp: Slot [26] registered Oct 2 19:32:37.828240 kernel: acpiphp: Slot [27] registered Oct 2 19:32:37.828246 kernel: acpiphp: Slot [28] registered Oct 2 19:32:37.828253 kernel: acpiphp: Slot [29] registered Oct 2 19:32:37.828260 kernel: acpiphp: Slot [30] registered Oct 2 19:32:37.828266 kernel: acpiphp: Slot [31] registered Oct 2 19:32:37.828273 kernel: PCI host bridge to bus 0000:00 Oct 2 19:32:37.828365 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:32:37.828429 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:32:37.828553 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 19:32:37.828626 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Oct 2 19:32:37.828808 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 19:32:37.828866 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:32:37.828953 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:32:37.829035 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:32:37.829122 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 19:32:37.829207 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Oct 2 19:32:37.829288 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 19:32:37.829367 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 19:32:37.829436 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 19:32:37.829532 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 19:32:37.829614 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 19:32:37.829683 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 19:32:37.829749 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 19:32:37.829819 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 31250 usecs Oct 2 19:32:37.829892 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Oct 2 19:32:37.829957 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 2 19:32:37.830026 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 2 19:32:37.830102 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 2 19:32:37.830166 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:32:37.830240 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:32:37.830309 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 19:32:37.830380 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 2 19:32:37.830447 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 2 19:32:37.830548 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 19:32:37.830618 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 19:32:37.830685 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 2 19:32:37.830751 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 2 19:32:37.830826 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Oct 2 19:32:37.830893 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Oct 2 19:32:37.830971 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 2 19:32:37.831043 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 2 19:32:37.831112 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 2 19:32:37.831123 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:32:37.831132 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:32:37.831141 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:32:37.831149 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:32:37.831157 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:32:37.831165 kernel: iommu: Default domain type: Translated Oct 2 19:32:37.831177 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:32:37.831272 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 19:32:37.831367 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:32:37.831463 
kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 19:32:37.831476 kernel: vgaarb: loaded Oct 2 19:32:37.831497 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:32:37.831506 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:32:37.831529 kernel: PTP clock support registered Oct 2 19:32:37.831537 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:32:37.831549 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:32:37.831556 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:32:37.831564 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Oct 2 19:32:37.831570 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 2 19:32:37.831577 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 2 19:32:37.831584 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:32:37.831591 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:32:37.831598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:32:37.831605 kernel: pnp: PnP ACPI init Oct 2 19:32:37.831832 kernel: pnp 00:02: [dma 2] Oct 2 19:32:37.831853 kernel: pnp: PnP ACPI: found 6 devices Oct 2 19:32:37.831863 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:32:37.831871 kernel: NET: Registered PF_INET protocol family Oct 2 19:32:37.831878 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:32:37.831886 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:32:37.831893 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:32:37.831899 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:32:37.831909 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:32:37.831916 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:32:37.831923 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:32:37.831930 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:32:37.831937 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:32:37.831943 kernel: NET: Registered PF_XDP protocol family Oct 2 19:32:37.832015 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:32:37.832138 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:32:37.832200 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:32:37.832281 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Oct 2 19:32:37.836703 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 19:32:37.836854 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 19:32:37.836969 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:32:37.837079 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:32:37.837095 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:32:37.837107 kernel: Initialise system trusted keyrings Oct 2 19:32:37.837124 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:32:37.837135 kernel: Key type asymmetric registered Oct 2 19:32:37.837145 kernel: Asymmetric key parser 'x509' registered Oct 2 19:32:37.837155 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:32:37.837166 kernel: io scheduler mq-deadline registered Oct 2 19:32:37.837184 kernel: io scheduler kyber 
registered Oct 2 19:32:37.837194 kernel: io scheduler bfq registered Oct 2 19:32:37.837204 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 19:32:37.837214 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 19:32:37.837224 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 19:32:37.837237 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 19:32:37.837247 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:32:37.837257 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:32:37.837267 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:32:37.837276 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:32:37.837285 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:32:37.837296 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:32:37.837411 kernel: rtc_cmos 00:05: RTC can wake from S4 Oct 2 19:32:37.837532 kernel: rtc_cmos 00:05: registered as rtc0 Oct 2 19:32:37.837622 kernel: rtc_cmos 00:05: setting system clock to 2023-10-02T19:32:37 UTC (1696275157) Oct 2 19:32:37.837712 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 2 19:32:37.837726 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:32:37.837737 kernel: Segment Routing with IPv6 Oct 2 19:32:37.837747 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:32:37.837757 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:32:37.837766 kernel: Key type dns_resolver registered Oct 2 19:32:37.837776 kernel: IPI shorthand broadcast: enabled Oct 2 19:32:37.837789 kernel: sched_clock: Marking stable (631219731, 93603595)->(797712957, -72889631) Oct 2 19:32:37.837799 kernel: registered taskstats version 1 Oct 2 19:32:37.837809 kernel: Loading compiled-in X.509 certificates Oct 2 19:32:37.837819 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:32:37.837828 kernel: Key type .fscrypt registered Oct 2 19:32:37.837838 kernel: Key type fscrypt-provisioning registered Oct 2 19:32:37.837848 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:32:37.837858 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:32:37.837869 kernel: ima: No architecture policies found Oct 2 19:32:37.837879 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:32:37.837889 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:32:37.837899 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:32:37.837909 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:32:37.837918 kernel: Run /init as init process Oct 2 19:32:37.837928 kernel: with arguments: Oct 2 19:32:37.837939 kernel: /init Oct 2 19:32:37.837962 kernel: with environment: Oct 2 19:32:37.837975 kernel: HOME=/ Oct 2 19:32:37.837984 kernel: TERM=linux Oct 2 19:32:37.837995 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:32:37.838008 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:32:37.838021 systemd[1]: Detected virtualization kvm. Oct 2 19:32:37.838033 systemd[1]: Detected architecture x86-64. Oct 2 19:32:37.838043 systemd[1]: Running in initrd. 
Oct 2 19:32:37.838055 systemd[1]: No hostname configured, using default hostname. Oct 2 19:32:37.838065 systemd[1]: Hostname set to . Oct 2 19:32:37.838077 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:32:37.838104 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:32:37.838116 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:32:37.838127 systemd[1]: Reached target cryptsetup.target. Oct 2 19:32:37.838137 systemd[1]: Reached target paths.target. Oct 2 19:32:37.838148 systemd[1]: Reached target slices.target. Oct 2 19:32:37.838158 systemd[1]: Reached target swap.target. Oct 2 19:32:37.838171 systemd[1]: Reached target timers.target. Oct 2 19:32:37.838182 systemd[1]: Listening on iscsid.socket. Oct 2 19:32:37.838192 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:32:37.838203 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:32:37.838214 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:32:37.838224 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:32:37.838235 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:32:37.838248 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:32:37.838259 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:32:37.838269 systemd[1]: Reached target sockets.target. Oct 2 19:32:37.838280 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:32:37.838291 systemd[1]: Finished network-cleanup.service. Oct 2 19:32:37.838302 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:32:37.838314 systemd[1]: Starting systemd-journald.service... Oct 2 19:32:37.838327 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:32:37.838339 systemd[1]: Starting systemd-resolved.service... Oct 2 19:32:37.838351 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:32:37.838362 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:32:37.838373 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:32:37.838384 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:32:37.838395 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:32:37.838696 systemd-journald[199]: Journal started Oct 2 19:32:37.838763 systemd-journald[199]: Runtime Journal (/run/log/journal/45000b260e2b4b6d95311dd2acb539a4) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:32:37.832450 systemd-modules-load[200]: Inserted module 'overlay' Oct 2 19:32:37.853116 systemd[1]: Started systemd-journald.service. Oct 2 19:32:37.853147 kernel: audit: type=1130 audit(1696275157.848:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.842941 systemd-resolved[201]: Positive Trust Anchors: Oct 2 19:32:37.855242 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:32:37.842951 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:32:37.859190 kernel: audit: type=1130 audit(1696275157.854:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:37.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.842977 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:32:37.865786 kernel: Bridge firewalling registered Oct 2 19:32:37.865802 kernel: audit: type=1130 audit(1696275157.859:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.845216 systemd-resolved[201]: Defaulting to hostname 'linux'. Oct 2 19:32:37.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.855724 systemd[1]: Started systemd-resolved.service. Oct 2 19:32:37.870158 kernel: audit: type=1130 audit(1696275157.865:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.860307 systemd-modules-load[200]: Inserted module 'br_netfilter' Oct 2 19:32:37.860392 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:32:37.866271 systemd[1]: Reached target nss-lookup.target. Oct 2 19:32:37.870842 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:32:37.883535 kernel: SCSI subsystem initialized Oct 2 19:32:37.885470 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:32:37.889926 kernel: audit: type=1130 audit(1696275157.885:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.889050 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:32:37.895817 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 2 19:32:37.895891 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:32:37.896591 dracut-cmdline[216]: dracut-dracut-053 Oct 2 19:32:37.897556 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:32:37.898401 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:32:37.902047 systemd-modules-load[200]: Inserted module 'dm_multipath' Oct 2 19:32:37.902778 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:32:37.907133 kernel: audit: type=1130 audit(1696275157.902:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.904165 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:32:37.911348 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:32:37.914621 kernel: audit: type=1130 audit(1696275157.911:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:37.960541 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:32:37.980855 kernel: iscsi: registered transport (tcp) Oct 2 19:32:38.000541 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:32:38.000609 kernel: QLogic iSCSI HBA Driver Oct 2 19:32:38.023504 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:32:38.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:38.034335 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:32:38.035258 kernel: audit: type=1130 audit(1696275158.029:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:38.084547 kernel: raid6: avx2x4 gen() 26849 MB/s Oct 2 19:32:38.101545 kernel: raid6: avx2x4 xor() 7974 MB/s Oct 2 19:32:38.118543 kernel: raid6: avx2x2 gen() 30493 MB/s Oct 2 19:32:38.135546 kernel: raid6: avx2x2 xor() 18419 MB/s Oct 2 19:32:38.152541 kernel: raid6: avx2x1 gen() 23128 MB/s Oct 2 19:32:38.169539 kernel: raid6: avx2x1 xor() 15315 MB/s Oct 2 19:32:38.186585 kernel: raid6: sse2x4 gen() 14478 MB/s Oct 2 19:32:38.203545 kernel: raid6: sse2x4 xor() 7412 MB/s Oct 2 19:32:38.220555 kernel: raid6: sse2x2 gen() 15448 MB/s Oct 2 19:32:38.237560 kernel: raid6: sse2x2 xor() 7746 MB/s Oct 2 19:32:38.254553 kernel: raid6: sse2x1 gen() 12008 MB/s Oct 2 19:32:38.271942 kernel: raid6: sse2x1 xor() 7699 MB/s Oct 2 19:32:38.272011 kernel: raid6: using algorithm avx2x2 gen() 30493 MB/s Oct 2 19:32:38.272032 kernel: raid6: .... xor() 18419 MB/s, rmw enabled Oct 2 19:32:38.272042 kernel: raid6: using avx2x2 recovery algorithm Oct 2 19:32:38.283535 kernel: xor: automatically using best checksumming function avx Oct 2 19:32:38.379539 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:32:38.385979 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:32:38.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:38.388000 audit: BPF prog-id=7 op=LOAD Oct 2 19:32:38.388000 audit: BPF prog-id=8 op=LOAD Oct 2 19:32:38.389378 systemd[1]: Starting systemd-udevd.service... Oct 2 19:32:38.390453 kernel: audit: type=1130 audit(1696275158.386:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:38.405426 systemd-udevd[400]: Using default interface naming scheme 'v252'. Oct 2 19:32:38.410541 systemd[1]: Started systemd-udevd.service. Oct 2 19:32:38.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:38.413245 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:32:38.421629 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Oct 2 19:32:38.443539 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:32:38.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:38.445655 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:32:38.479361 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:32:38.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:38.504535 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:32:38.515530 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:32:38.515577 kernel: AES CTR mode by8 optimization enabled Oct 2 19:32:38.520529 kernel: libata version 3.00 loaded. 
Oct 2 19:32:38.524530 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 19:32:38.525523 kernel: scsi host0: ata_piix Oct 2 19:32:38.528569 kernel: scsi host1: ata_piix Oct 2 19:32:38.528722 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Oct 2 19:32:38.528738 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Oct 2 19:32:38.532610 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:32:38.540532 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:32:38.688641 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 2 19:32:38.690562 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 2 19:32:38.711560 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (443) Oct 2 19:32:38.712544 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:32:38.715417 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:32:38.715795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:32:38.721307 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:32:38.724028 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 2 19:32:38.724243 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 19:32:38.731190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:32:38.732751 systemd[1]: Starting disk-uuid.service... Oct 2 19:32:38.738536 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 2 19:32:38.745535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:32:38.752550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:32:38.755549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:32:39.921781 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:32:39.921853 disk-uuid[514]: The operation has completed successfully. Oct 2 19:32:39.962350 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:32:39.962484 systemd[1]: Finished disk-uuid.service. Oct 2 19:32:39.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:39.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:39.992528 systemd[1]: Starting verity-setup.service... Oct 2 19:32:40.020820 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:32:40.136004 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:32:40.137528 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:32:40.143468 systemd[1]: Finished verity-setup.service. Oct 2 19:32:40.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.293950 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:32:40.294252 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:32:40.296012 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:32:40.298581 systemd[1]: Starting ignition-setup.service... Oct 2 19:32:40.302250 systemd[1]: Starting parse-ip-for-networkd.service... 
Oct 2 19:32:40.311607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:32:40.311682 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:32:40.311692 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:32:40.322626 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:32:40.336089 systemd[1]: Finished ignition-setup.service. Oct 2 19:32:40.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.338039 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:32:40.398177 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:32:40.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.399000 audit: BPF prog-id=9 op=LOAD Oct 2 19:32:40.400709 systemd[1]: Starting systemd-networkd.service... Oct 2 19:32:40.427173 systemd-networkd[693]: lo: Link UP Oct 2 19:32:40.427184 systemd-networkd[693]: lo: Gained carrier Oct 2 19:32:40.427831 systemd-networkd[693]: Enumeration completed Oct 2 19:32:40.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.428215 systemd-networkd[693]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:32:40.430155 systemd-networkd[693]: eth0: Link UP Oct 2 19:32:40.430160 systemd-networkd[693]: eth0: Gained carrier Oct 2 19:32:40.430935 systemd[1]: Started systemd-networkd.service. Oct 2 19:32:40.432725 systemd[1]: Reached target network.target. Oct 2 19:32:40.440425 systemd[1]: Starting iscsiuio.service... Oct 2 19:32:40.447049 systemd[1]: Started iscsiuio.service. Oct 2 19:32:40.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.454157 systemd-networkd[693]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:32:40.457214 systemd[1]: Starting iscsid.service... Oct 2 19:32:40.461013 iscsid[698]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:32:40.461013 iscsid[698]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:32:40.461013 iscsid[698]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:32:40.461013 iscsid[698]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:32:40.461013 iscsid[698]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:32:40.461013 iscsid[698]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:32:40.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:32:40.462381 systemd[1]: Started iscsid.service. Oct 2 19:32:40.477722 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:32:40.513681 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:32:40.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.514829 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:32:40.516137 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:32:40.516391 systemd[1]: Reached target remote-fs.target. Oct 2 19:32:40.521798 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:32:40.523266 ignition[610]: Ignition 2.14.0 Oct 2 19:32:40.523274 ignition[610]: Stage: fetch-offline Oct 2 19:32:40.523317 ignition[610]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:32:40.523324 ignition[610]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:32:40.523455 ignition[610]: parsed url from cmdline: "" Oct 2 19:32:40.523460 ignition[610]: no config URL provided Oct 2 19:32:40.523467 ignition[610]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:32:40.523478 ignition[610]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:32:40.523507 ignition[610]: op(1): [started] loading QEMU firmware config module Oct 2 19:32:40.523527 ignition[610]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:32:40.530335 ignition[610]: op(1): [finished] loading QEMU firmware config module Oct 2 19:32:40.544814 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:32:40.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.549108 ignition[610]: parsing config with SHA512: 5144fe1505ec210ec4b02c3de75358db10a63ccf11ef65beec20bab98481d32394c75f43efb4ebf8dcd7a6e500f51cdafd7d03b2ee9a8bc207242aa88d57bd2e Oct 2 19:32:40.576862 unknown[610]: fetched base config from "system" Oct 2 19:32:40.576878 unknown[610]: fetched user config from "qemu" Oct 2 19:32:40.577531 ignition[610]: fetch-offline: fetch-offline passed Oct 2 19:32:40.577619 ignition[610]: Ignition finished successfully Oct 2 19:32:40.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.578654 systemd-resolved[201]: Detected conflict on linux IN A 10.0.0.18 Oct 2 19:32:40.578665 systemd-resolved[201]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Oct 2 19:32:40.579304 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:32:40.579767 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:32:40.580719 systemd[1]: Starting ignition-kargs.service... Oct 2 19:32:40.593894 ignition[714]: Ignition 2.14.0 Oct 2 19:32:40.593911 ignition[714]: Stage: kargs Oct 2 19:32:40.594068 ignition[714]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:32:40.594081 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:32:40.595455 ignition[714]: kargs: kargs passed Oct 2 19:32:40.595543 ignition[714]: Ignition finished successfully Oct 2 19:32:40.598494 systemd[1]: Finished ignition-kargs.service. 
Oct 2 19:32:40.600623 systemd[1]: Starting ignition-disks.service... Oct 2 19:32:40.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.607820 ignition[720]: Ignition 2.14.0 Oct 2 19:32:40.607832 ignition[720]: Stage: disks Oct 2 19:32:40.608066 ignition[720]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:32:40.608078 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:32:40.609817 ignition[720]: disks: disks passed Oct 2 19:32:40.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.611079 systemd[1]: Finished ignition-disks.service. Oct 2 19:32:40.609968 ignition[720]: Ignition finished successfully Oct 2 19:32:40.612217 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:32:40.612892 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:32:40.614123 systemd[1]: Reached target local-fs.target. Oct 2 19:32:40.616655 systemd[1]: Reached target sysinit.target. Oct 2 19:32:40.618109 systemd[1]: Reached target basic.target. Oct 2 19:32:40.619738 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:32:40.634167 systemd-fsck[728]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:32:40.955229 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:32:40.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:40.957352 systemd[1]: Mounting sysroot.mount... Oct 2 19:32:40.973529 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:32:40.973773 systemd[1]: Mounted sysroot.mount. Oct 2 19:32:40.974097 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:32:40.976086 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:32:40.977634 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:32:40.977792 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:32:40.977818 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:32:40.982539 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:32:40.983980 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:32:40.988363 initrd-setup-root[738]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:32:40.993509 initrd-setup-root[746]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:32:40.999280 initrd-setup-root[754]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:32:41.003228 initrd-setup-root[762]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:32:41.037920 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:32:41.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:41.039104 systemd[1]: Starting ignition-mount.service... Oct 2 19:32:41.041236 systemd[1]: Starting sysroot-boot.service... Oct 2 19:32:41.048343 bash[780]: umount: /sysroot/usr/share/oem: not mounted. 
Oct 2 19:32:41.058278 ignition[781]: INFO : Ignition 2.14.0 Oct 2 19:32:41.058278 ignition[781]: INFO : Stage: mount Oct 2 19:32:41.060374 ignition[781]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:32:41.060374 ignition[781]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:32:41.060374 ignition[781]: INFO : mount: mount passed Oct 2 19:32:41.060374 ignition[781]: INFO : Ignition finished successfully Oct 2 19:32:41.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:41.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:41.060574 systemd[1]: Finished ignition-mount.service. Oct 2 19:32:41.061019 systemd[1]: Finished sysroot-boot.service. Oct 2 19:32:41.159245 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:32:41.166532 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (789) Oct 2 19:32:41.166587 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:32:41.207587 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:32:41.207669 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:32:41.211502 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:32:41.212767 systemd[1]: Starting ignition-files.service... Oct 2 19:32:41.226336 ignition[809]: INFO : Ignition 2.14.0 Oct 2 19:32:41.226336 ignition[809]: INFO : Stage: files Oct 2 19:32:41.228281 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:32:41.228281 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:32:41.229924 ignition[809]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:32:41.229924 ignition[809]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:32:41.229924 ignition[809]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:32:41.234450 ignition[809]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:32:41.234450 ignition[809]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:32:41.234450 ignition[809]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:32:41.234450 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:32:41.234450 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:32:41.232857 unknown[809]: wrote ssh authorized keys file for user: core Oct 2 19:32:41.454448 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:32:42.135527 systemd-networkd[693]: eth0: Gained IPv6LL Oct 2 19:32:42.254420 ignition[809]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:32:42.254420 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:32:42.254420 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:32:42.254420 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:32:42.392434 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:32:42.658487 ignition[809]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:32:42.669988 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:32:42.669988 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:32:42.669988 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:32:42.768846 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:32:43.467237 ignition[809]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Oct 2 19:32:43.467237 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:32:43.482401 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:32:43.482401 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:32:43.568735 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:32:45.143707 ignition[809]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Oct 2 19:32:45.143707 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:32:45.143707 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:32:45.147995 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:32:45.147995 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:32:45.147995 ignition[809]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:32:45.147995 ignition[809]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:32:45.147995 ignition[809]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at 
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:32:45.154396 ignition[809]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:32:45.154396 ignition[809]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:32:45.154396 ignition[809]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 19:32:45.154396 ignition[809]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:32:45.154396 ignition[809]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:32:45.154396 ignition[809]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 19:32:45.161630 ignition[809]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 2 19:32:45.161630 ignition[809]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:32:45.161630 ignition[809]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:32:45.161630 ignition[809]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 19:32:45.161630 ignition[809]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:32:45.167660 ignition[809]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:32:45.167660 ignition[809]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:32:45.167660 ignition[809]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:32:45.167660 ignition[809]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:32:45.167660 ignition[809]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:32:45.346522 ignition[809]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:32:45.347787 ignition[809]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:32:45.347787 ignition[809]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:32:45.347787 ignition[809]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:32:45.347787 ignition[809]: INFO : files: files passed Oct 2 19:32:45.347787 ignition[809]: INFO : Ignition finished successfully Oct 2 19:32:45.357265 systemd[1]: Finished ignition-files.service. Oct 2 19:32:45.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.360013 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 2 19:32:45.360047 kernel: audit: type=1130 audit(1696275165.359:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:45.360620 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:32:45.366164 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:32:45.370118 initrd-setup-root-after-ignition[832]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:32:45.369957 systemd[1]: Starting ignition-quench.service... Oct 2 19:32:45.372861 initrd-setup-root-after-ignition[834]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:32:45.374482 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:32:45.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.376410 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:32:45.376520 systemd[1]: Finished ignition-quench.service. Oct 2 19:32:45.381420 kernel: audit: type=1130 audit(1696275165.375:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.381872 systemd[1]: Reached target ignition-complete.target. Oct 2 19:32:45.389549 kernel: audit: type=1130 audit(1696275165.381:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.389588 kernel: audit: type=1131 audit(1696275165.381:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.389426 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:32:45.408214 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:32:45.408371 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:32:45.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.410904 systemd[1]: Reached target initrd-fs.target. Oct 2 19:32:45.417210 kernel: audit: type=1130 audit(1696275165.410:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.417237 kernel: audit: type=1131 audit(1696275165.410:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:45.417229 systemd[1]: Reached target initrd.target. Oct 2 19:32:45.418570 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:32:45.420544 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:32:45.441669 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:32:45.447806 kernel: audit: type=1130 audit(1696275165.441:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.448475 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:32:45.461910 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:32:45.462829 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:32:45.464095 systemd[1]: Stopped target timers.target. Oct 2 19:32:45.465588 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:32:45.470244 kernel: audit: type=1131 audit(1696275165.466:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.465745 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:32:45.466882 systemd[1]: Stopped target initrd.target. Oct 2 19:32:45.471073 systemd[1]: Stopped target basic.target. Oct 2 19:32:45.472318 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:32:45.473651 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:32:45.474932 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:32:45.477098 systemd[1]: Stopped target remote-fs.target. Oct 2 19:32:45.478422 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:32:45.479805 systemd[1]: Stopped target sysinit.target. Oct 2 19:32:45.481088 systemd[1]: Stopped target local-fs.target. Oct 2 19:32:45.482353 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:32:45.483633 systemd[1]: Stopped target swap.target. Oct 2 19:32:45.484811 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:32:45.490754 kernel: audit: type=1131 audit(1696275165.486:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.484953 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:32:45.486303 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:32:45.495935 kernel: audit: type=1131 audit(1696275165.491:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:45.490837 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:32:45.490982 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:32:45.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.492363 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:32:45.492486 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:32:45.496204 systemd[1]: Stopped target paths.target. Oct 2 19:32:45.497722 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:32:45.502722 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:32:45.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.503236 systemd[1]: Stopped target slices.target. Oct 2 19:32:45.503607 systemd[1]: Stopped target sockets.target. Oct 2 19:32:45.503730 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:32:45.503811 systemd[1]: Closed iscsid.socket. Oct 2 19:32:45.504041 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:32:45.504148 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:32:45.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.516829 ignition[849]: INFO : Ignition 2.14.0 Oct 2 19:32:45.516829 ignition[849]: INFO : Stage: umount Oct 2 19:32:45.516829 ignition[849]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:32:45.516829 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:32:45.516829 ignition[849]: INFO : umount: umount passed Oct 2 19:32:45.516829 ignition[849]: INFO : Ignition finished successfully Oct 2 19:32:45.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.504327 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:32:45.504423 systemd[1]: Stopped ignition-files.service. Oct 2 19:32:45.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.505548 systemd[1]: Stopping ignition-mount.service... Oct 2 19:32:45.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:45.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.507132 systemd[1]: Stopping iscsiuio.service... Oct 2 19:32:45.509975 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:32:45.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.512538 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:32:45.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.512762 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:32:45.515234 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:32:45.515436 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:32:45.520781 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:32:45.520894 systemd[1]: Stopped iscsiuio.service. Oct 2 19:32:45.523828 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:32:45.523924 systemd[1]: Stopped ignition-mount.service. Oct 2 19:32:45.525736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:32:45.525817 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:32:45.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.527116 systemd[1]: Stopped target network.target. Oct 2 19:32:45.527497 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:32:45.527545 systemd[1]: Closed iscsiuio.socket. Oct 2 19:32:45.529035 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:32:45.529087 systemd[1]: Stopped ignition-disks.service. Oct 2 19:32:45.530089 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:32:45.530132 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:32:45.537566 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:32:45.537633 systemd[1]: Stopped ignition-setup.service. Oct 2 19:32:45.538936 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:32:45.540489 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:32:45.547842 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:32:45.547950 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:32:45.564042 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:32:45.568634 systemd-networkd[693]: eth0: DHCPv6 lease lost Oct 2 19:32:45.568000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:32:45.571733 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:32:45.571858 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:32:45.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:45.574612 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:32:45.574679 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:32:45.580419 systemd[1]: Stopping network-cleanup.service... Oct 2 19:32:45.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.581104 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:32:45.583000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:32:45.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.581162 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:32:45.582725 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:32:45.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.582773 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:32:45.584546 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:32:45.584626 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:32:45.585192 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:32:45.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.588501 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:32:45.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.589566 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:32:45.589663 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:32:45.591009 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:32:45.591136 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:32:45.595848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:32:45.595902 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:32:45.597818 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:32:45.597861 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:32:45.599322 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:32:45.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.599377 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:32:45.601188 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:32:45.601898 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:32:45.602966 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:32:45.602997 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:32:45.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:45.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.605580 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:32:45.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.605615 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:32:45.608320 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:32:45.610398 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:32:45.610469 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:32:45.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.613028 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:32:45.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.613074 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:32:45.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.613878 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:32:45.613920 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:32:45.618132 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:32:45.620198 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:32:45.621150 systemd[1]: Stopped network-cleanup.service. Oct 2 19:32:45.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.622687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:32:45.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:45.622772 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:32:45.625352 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:32:45.631162 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:32:45.645953 systemd[1]: Switching root. Oct 2 19:32:45.667831 iscsid[698]: iscsid shutting down. Oct 2 19:32:45.668880 systemd-journald[199]: Received SIGTERM from PID 1 (n/a). Oct 2 19:32:45.668937 systemd-journald[199]: Journal stopped Oct 2 19:32:49.496438 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:32:49.496482 kernel: SELinux: Class anon_inode not defined in policy. 
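During the files stage logged earlier (the op(3) through op(6) GET entries), Ignition downloaded the CNI plugins, crictl, kubeadm and kubelet and compared each against an expected sha512 digest before writing it under /sysroot. A rough stand-alone equivalent of that check, a sketch rather than Ignition's actual implementation; the URL and digest are copied verbatim from the log, while fetch_and_verify and the destination path are hypothetical:

import hashlib
import urllib.request

def fetch_and_verify(url: str, expected_sha512: str, dest: str) -> None:
    """Download url to dest, hashing as we stream, and fail on a digest mismatch."""
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        for chunk in iter(lambda: response.read(1 << 20), b""):
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest() != expected_sha512.lower():
        raise ValueError(f"checksum mismatch for {url}: got {digest.hexdigest()}")

if __name__ == "__main__":
    # URL and sha512 taken verbatim from the Ignition "files" stage above.
    fetch_and_verify(
        "https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz",
        "5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af"
        "754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540",
        "/tmp/cni-plugins-linux-amd64-v1.3.0.tgz",
    )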
Oct 2 19:32:49.496492 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:32:49.496522 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:32:49.496532 kernel: SELinux: policy capability open_perms=1 Oct 2 19:32:49.496541 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:32:49.496552 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:32:49.496561 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:32:49.496570 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:32:49.496583 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:32:49.496592 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:32:49.496602 systemd[1]: Successfully loaded SELinux policy in 64.725ms. Oct 2 19:32:49.496623 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.399ms. Oct 2 19:32:49.496635 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:32:49.496646 systemd[1]: Detected virtualization kvm. Oct 2 19:32:49.496656 systemd[1]: Detected architecture x86-64. Oct 2 19:32:49.496666 systemd[1]: Detected first boot. Oct 2 19:32:49.496675 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:32:49.496685 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:32:49.496697 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:32:49.496709 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:32:49.496722 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:32:49.496736 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:32:49.496745 systemd[1]: Stopped iscsid.service. Oct 2 19:32:49.496755 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:32:49.496765 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:32:49.496777 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:32:49.496787 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:32:49.496797 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:32:49.496807 systemd[1]: Created slice system-getty.slice. Oct 2 19:32:49.496817 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:32:49.496826 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:32:49.496836 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:32:49.496846 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:32:49.496856 systemd[1]: Created slice user.slice. Oct 2 19:32:49.496867 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:32:49.496877 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:32:49.496888 systemd[1]: Set up automount boot.automount. Oct 2 19:32:49.496898 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
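The "systemd 252 running in system mode (+PAM +AUDIT ...)" line above encodes which optional features this systemd build was compiled with. A small sketch that splits that feature string into enabled flags, disabled flags and key=value settings; parse_features is a hypothetical helper, and the string below is copied from the log:

def parse_features(feature_string: str):
    """Split a systemd feature string into enabled flags, disabled flags and settings."""
    enabled, disabled, settings = set(), set(), {}
    for token in feature_string.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
        elif "=" in token:
            key, value = token.split("=", 1)
            settings[key] = value
    return enabled, disabled, settings

features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT "
            "default-hierarchy=unified")
enabled, disabled, settings = parse_features(features)
print(sorted(disabled))   # APPARMOR, GNUTLS, ACL, ... exactly as reported in the log line
print(settings)           # {'default-hierarchy': 'unified'}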
Oct 2 19:32:49.496908 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:32:49.496918 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:32:49.496928 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:32:49.496938 systemd[1]: Reached target integritysetup.target. Oct 2 19:32:49.496948 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:32:49.496961 systemd[1]: Reached target remote-fs.target. Oct 2 19:32:49.496971 systemd[1]: Reached target slices.target. Oct 2 19:32:49.496980 systemd[1]: Reached target swap.target. Oct 2 19:32:49.496990 systemd[1]: Reached target torcx.target. Oct 2 19:32:49.497000 systemd[1]: Reached target veritysetup.target. Oct 2 19:32:49.497010 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:32:49.497019 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:32:49.497029 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:32:49.497039 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:32:49.497050 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:32:49.497061 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:32:49.497070 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:32:49.497080 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:32:49.497090 systemd[1]: Mounting media.mount... Oct 2 19:32:49.497100 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:32:49.497111 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:32:49.497121 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:32:49.497131 systemd[1]: Mounting tmp.mount... Oct 2 19:32:49.497142 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:32:49.497152 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:32:49.497168 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:32:49.497178 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:32:49.497188 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:32:49.497198 systemd[1]: Starting modprobe@drm.service... Oct 2 19:32:49.497207 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:32:49.497218 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:32:49.497228 systemd[1]: Starting modprobe@loop.service... Oct 2 19:32:49.497239 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:32:49.497249 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:32:49.497259 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:32:49.497269 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:32:49.497279 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:32:49.497290 systemd[1]: Stopped systemd-journald.service. Oct 2 19:32:49.497300 systemd[1]: Starting systemd-journald.service... Oct 2 19:32:49.497310 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:32:49.497320 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:32:49.497331 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:32:49.497340 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:32:49.497350 kernel: loop: module loaded Oct 2 19:32:49.497361 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:32:49.497371 systemd[1]: Stopped verity-setup.service. 
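The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop instances being started above are template units whose job is to load one kernel module per instance name (the "loop: module loaded" kernel message is the visible result). A simplified stand-in for that behaviour, not the actual unit definition, and it needs root to have any effect:

import subprocess

# Instance names taken from the modprobe@<instance>.service units started above.
MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

def load_module(name: str) -> bool:
    """Try to load a kernel module; return True if modprobe exited successfully."""
    result = subprocess.run(["modprobe", name], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"modprobe {name} failed: {result.stderr.strip()}")
    return result.returncode == 0

if __name__ == "__main__":
    for module in MODULES:
        print(module, "loaded" if load_module(module) else "skipped")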
Oct 2 19:32:49.497380 kernel: fuse: init (API version 7.34) Oct 2 19:32:49.497390 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:32:49.497401 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:32:49.497411 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:32:49.497422 systemd[1]: Mounted media.mount. Oct 2 19:32:49.497432 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:32:49.497444 systemd-journald[955]: Journal started Oct 2 19:32:49.497479 systemd-journald[955]: Runtime Journal (/run/log/journal/45000b260e2b4b6d95311dd2acb539a4) is 6.0M, max 48.5M, 42.5M free. Oct 2 19:32:45.808000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:32:46.080000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:32:46.082000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:32:46.083000 audit: BPF prog-id=10 op=LOAD Oct 2 19:32:46.083000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:32:46.083000 audit: BPF prog-id=11 op=LOAD Oct 2 19:32:46.083000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:32:49.299000 audit: BPF prog-id=12 op=LOAD Oct 2 19:32:49.299000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:32:49.299000 audit: BPF prog-id=13 op=LOAD Oct 2 19:32:49.299000 audit: BPF prog-id=14 op=LOAD Oct 2 19:32:49.299000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:32:49.299000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:32:49.300000 audit: BPF prog-id=15 op=LOAD Oct 2 19:32:49.300000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:32:49.301000 audit: BPF prog-id=16 op=LOAD Oct 2 19:32:49.301000 audit: BPF prog-id=17 op=LOAD Oct 2 19:32:49.301000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:32:49.301000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:32:49.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.326000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:32:49.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:49.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.415000 audit: BPF prog-id=18 op=LOAD Oct 2 19:32:49.416000 audit: BPF prog-id=19 op=LOAD Oct 2 19:32:49.416000 audit: BPF prog-id=20 op=LOAD Oct 2 19:32:49.416000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:32:49.416000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:32:49.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.495000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:32:49.495000 audit[955]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffec6dea330 a2=4000 a3=7ffec6dea3cc items=0 ppid=1 pid=955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:49.495000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:32:49.298601 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:32:46.171285 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:32:49.298611 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:32:46.171626 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:32:49.302456 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:32:46.171651 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:32:46.171692 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:32:46.171705 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:32:46.171743 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:32:49.498905 systemd[1]: Started systemd-journald.service. 
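The torcx-generator line above ("common configuration parsed ... store_paths=[...]") lists the image stores the generator will consider, and its later "store skipped" messages drop every path that does not exist. An illustrative sketch of that scan, with the five store paths copied from the log; scan_stores is a hypothetical helper, not torcx code:

import os

STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.0",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.0",
    "/var/lib/torcx/store",
]

def scan_stores(paths):
    """Return (usable, skipped) store paths, mirroring the generator's debug output."""
    usable, skipped = [], []
    for path in paths:
        (usable if os.path.isdir(path) else skipped).append(path)
    return usable, skipped

usable, skipped = scan_stores(STORE_PATHS)
print("usable stores:", usable)
print("skipped stores:", skipped)
# On the boot logged here, only /usr/share/torcx/store exists; the other four are skipped.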
Oct 2 19:32:46.171759 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:32:46.172012 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:32:46.172059 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:32:46.172075 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:32:46.172542 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:32:46.172585 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:32:46.172793 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:32:46.172817 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:32:46.172840 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:32:49.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:46.172857 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:32:49.027778 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:49Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:32:49.028047 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:49Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:32:49.028157 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:49Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:32:49.028342 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:49Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:32:49.028388 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:49Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:32:49.028451 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2023-10-02T19:32:49Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:32:49.500227 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:32:49.500915 systemd[1]: Mounted tmp.mount. Oct 2 19:32:49.501830 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:32:49.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.502797 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:32:49.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.503717 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:32:49.503919 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:32:49.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:49.504815 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:32:49.505037 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:32:49.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.505875 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:32:49.506047 systemd[1]: Finished modprobe@drm.service. Oct 2 19:32:49.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.506750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:32:49.506878 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:32:49.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.507625 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:32:49.507781 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:32:49.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.508472 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:32:49.508751 systemd[1]: Finished modprobe@loop.service. Oct 2 19:32:49.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.509563 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:32:49.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.510424 systemd[1]: Finished systemd-network-generator.service. 
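Each unit state change above is mirrored by an audit SERVICE_START or SERVICE_STOP record carrying the unit name and a result. A minimal sketch (hypothetical helper, not a systemd or auditd API) that pulls those fields out of records shaped like the ones in this journal, using one record copied from the log as the sample:

import re

AUDIT_RE = re.compile(
    r"audit\[\d+\]: (?P<type>SERVICE_START|SERVICE_STOP) .*?"
    r"unit=(?P<unit>\S+) .*?res=(?P<res>\w+)"
)

def summarize_audit(lines):
    """Yield (event type, unit, result) tuples for matching audit records."""
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            yield m.group("type"), m.group("unit"), m.group("res")

sample = ("Oct 2 19:32:49.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 "
          "ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop "
          "comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? "
          "terminal=? res=success'")
print(list(summarize_audit([sample])))   # [('SERVICE_STOP', 'modprobe@loop', 'success')]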
Oct 2 19:32:49.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.511301 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:32:49.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.512232 systemd[1]: Reached target network-pre.target. Oct 2 19:32:49.513766 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:32:49.515270 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:32:49.515860 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:32:49.517046 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:32:49.518674 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:32:49.519346 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:32:49.520144 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:32:49.520911 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:32:49.521737 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:32:49.523983 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:32:49.525819 systemd-journald[955]: Time spent on flushing to /var/log/journal/45000b260e2b4b6d95311dd2acb539a4 is 74.030ms for 1091 entries. Oct 2 19:32:49.525819 systemd-journald[955]: System Journal (/var/log/journal/45000b260e2b4b6d95311dd2acb539a4) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:32:49.887005 systemd-journald[955]: Received client request to flush runtime journal. Oct 2 19:32:49.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:49.526334 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:32:49.887742 udevadm[986]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:32:49.528223 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:32:49.553507 systemd[1]: Finished systemd-udev-trigger.service. 
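systemd-journald reports above that flushing the runtime journal to /var/log/journal took 74.030 ms for 1091 entries; the per-entry cost follows directly from those two numbers:

# Arithmetic on the flush statistics reported by systemd-journald above.
flush_ms = 74.030
entries = 1091
per_entry_us = flush_ms / entries * 1000
print(f"{per_entry_us:.1f} us per entry")               # ~67.9 us
print(f"{entries / (flush_ms / 1000):.0f} entries/s")   # ~14737 entries/s at that rate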
Oct 2 19:32:49.570685 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:32:49.726032 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:32:49.731721 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:32:49.734230 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:32:49.753274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:32:49.761977 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:32:49.762730 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:32:49.888796 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:32:49.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.508738 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:32:51.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.519774 kernel: kauditd_printk_skb: 92 callbacks suppressed Oct 2 19:32:51.519817 kernel: audit: type=1130 audit(1696275171.519:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.521000 audit: BPF prog-id=21 op=LOAD Oct 2 19:32:51.522523 kernel: audit: type=1334 audit(1696275171.521:135): prog-id=21 op=LOAD Oct 2 19:32:51.522553 kernel: audit: type=1334 audit(1696275171.522:136): prog-id=22 op=LOAD Oct 2 19:32:51.522000 audit: BPF prog-id=22 op=LOAD Oct 2 19:32:51.523105 systemd[1]: Starting systemd-udevd.service... Oct 2 19:32:51.522000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:32:51.522000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:32:51.523531 kernel: audit: type=1334 audit(1696275171.522:137): prog-id=7 op=UNLOAD Oct 2 19:32:51.523558 kernel: audit: type=1334 audit(1696275171.522:138): prog-id=8 op=UNLOAD Oct 2 19:32:51.538740 systemd-udevd[991]: Using default interface naming scheme 'v252'. Oct 2 19:32:51.566901 systemd[1]: Started systemd-udevd.service. Oct 2 19:32:51.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.570987 kernel: audit: type=1130 audit(1696275171.566:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.571000 audit: BPF prog-id=23 op=LOAD Oct 2 19:32:51.589392 systemd[1]: Starting systemd-networkd.service... Oct 2 19:32:51.590132 kernel: audit: type=1334 audit(1696275171.571:140): prog-id=23 op=LOAD Oct 2 19:32:51.592000 audit: BPF prog-id=24 op=LOAD Oct 2 19:32:51.593000 audit: BPF prog-id=25 op=LOAD Oct 2 19:32:51.595812 kernel: audit: type=1334 audit(1696275171.592:141): prog-id=24 op=LOAD Oct 2 19:32:51.595837 kernel: audit: type=1334 audit(1696275171.593:142): prog-id=25 op=LOAD Oct 2 19:32:51.595871 kernel: audit: type=1334 audit(1696275171.594:143): prog-id=26 op=LOAD Oct 2 19:32:51.594000 audit: BPF prog-id=26 op=LOAD Oct 2 19:32:51.596036 systemd[1]: Starting systemd-userdbd.service... 
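systemd-udevd above picks the default interface naming scheme 'v252'. A hedged sketch of how one could inspect what that scheme recorded for the NIC seen earlier in this log (eth0) by asking udevadm for the device's properties; it assumes udevadm accepts the /sys/class/net path as a positional argument and that ID_NET_NAME*/ID_NET_NAMING_SCHEME properties exist for this device:

import subprocess

def net_naming_properties(ifname: str) -> dict:
    """Return the ID_NET_NAM* udev properties recorded for a network interface."""
    out = subprocess.run(
        ["udevadm", "info", "--query=property", f"/sys/class/net/{ifname}"],
        capture_output=True, text=True, check=True,
    ).stdout
    props = dict(line.split("=", 1) for line in out.splitlines() if "=" in line)
    return {k: v for k, v in props.items() if k.startswith("ID_NET_NAM")}

if __name__ == "__main__":
    for key, value in net_naming_properties("eth0").items():
        print(key, "=", value)   # expected to include e.g. ID_NET_NAMING_SCHEME=v252 here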
Oct 2 19:32:51.597859 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:32:51.624311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:32:51.630719 systemd[1]: Started systemd-userdbd.service. Oct 2 19:32:51.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.656557 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:32:51.676655 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:32:51.696000 audit[1003]: AVC avc: denied { confidentiality } for pid=1003 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:32:51.696000 audit[1003]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e588dcabb0 a1=32194 a2=7f6d2d408bc5 a3=5 items=106 ppid=991 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:51.696000 audit: CWD cwd="/" Oct 2 19:32:51.696000 audit: PATH item=0 name=(null) inode=11191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=1 name=(null) inode=11192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=2 name=(null) inode=11191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=3 name=(null) inode=11193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=4 name=(null) inode=11191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=5 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=6 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=7 name=(null) inode=11195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=8 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=9 name=(null) inode=11196 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=10 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=11 name=(null) inode=11197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=12 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=13 name=(null) inode=11198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=14 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=15 name=(null) inode=11199 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=16 name=(null) inode=11191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=17 name=(null) inode=11200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=18 name=(null) inode=11200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=19 name=(null) inode=11201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=20 name=(null) inode=11200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=21 name=(null) inode=11202 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=22 name=(null) inode=11200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=23 name=(null) inode=11203 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=24 name=(null) inode=11200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=25 name=(null) inode=11204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=26 name=(null) inode=11200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=27 name=(null) inode=11205 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=28 name=(null) inode=11191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=29 name=(null) inode=11206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=30 name=(null) inode=11206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=31 name=(null) inode=11207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=32 name=(null) inode=11206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=33 name=(null) inode=11208 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=34 name=(null) inode=11206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=35 name=(null) inode=11209 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=36 name=(null) inode=11206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=37 name=(null) inode=11210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=38 name=(null) inode=11206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=39 name=(null) inode=11211 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=40 name=(null) inode=11191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=41 name=(null) inode=11212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=42 name=(null) inode=11212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=43 name=(null) inode=11213 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=44 name=(null) inode=11212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=45 name=(null) inode=11214 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=46 name=(null) inode=11212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=47 name=(null) inode=11215 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=48 name=(null) inode=11212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=49 name=(null) inode=11216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=50 name=(null) inode=11212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=51 name=(null) inode=11217 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=52 name=(null) inode=2063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=53 name=(null) inode=11218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=54 name=(null) inode=11218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=55 name=(null) inode=11219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=56 name=(null) inode=11218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=57 name=(null) inode=11220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=58 name=(null) inode=11218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=59 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=60 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=61 name=(null) inode=11222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=62 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=63 name=(null) inode=11223 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=64 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=65 name=(null) inode=11224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=66 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=67 name=(null) inode=11225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=68 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=69 name=(null) inode=11226 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=70 name=(null) inode=11218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=71 name=(null) inode=11227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=72 name=(null) inode=11227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=73 name=(null) inode=11228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=74 name=(null) inode=11227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=75 name=(null) inode=11229 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH 
item=76 name=(null) inode=11227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=77 name=(null) inode=11230 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=78 name=(null) inode=11227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=79 name=(null) inode=11231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=80 name=(null) inode=11227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=81 name=(null) inode=11232 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=82 name=(null) inode=11218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=83 name=(null) inode=11233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=84 name=(null) inode=11233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=85 name=(null) inode=11234 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=86 name=(null) inode=11233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=87 name=(null) inode=11235 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=88 name=(null) inode=11233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=89 name=(null) inode=11236 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=90 name=(null) inode=11233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=91 name=(null) inode=11237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=92 name=(null) inode=11233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=93 name=(null) inode=11238 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=94 name=(null) inode=11218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=95 name=(null) inode=11239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=96 name=(null) inode=11239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=97 name=(null) inode=11240 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=98 name=(null) inode=11239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=99 name=(null) inode=11241 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=100 name=(null) inode=11239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=101 name=(null) inode=11242 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=102 name=(null) inode=11239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=103 name=(null) inode=11243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=104 name=(null) inode=11239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PATH item=105 name=(null) inode=11244 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:51.696000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:32:51.804136 systemd-networkd[1006]: lo: Link UP Oct 2 19:32:51.804141 systemd-networkd[1006]: lo: Gained carrier Oct 2 19:32:51.804725 systemd-networkd[1006]: Enumeration completed Oct 2 19:32:51.804809 systemd[1]: Started systemd-networkd.service. Oct 2 19:32:51.817123 systemd-networkd[1006]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 2 19:32:51.818465 systemd-networkd[1006]: eth0: Link UP Oct 2 19:32:51.818475 systemd-networkd[1006]: eth0: Gained carrier Oct 2 19:32:51.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.825583 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 19:32:51.829541 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 19:32:51.831536 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:32:51.832636 systemd-networkd[1006]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:32:51.889211 kernel: kvm: Nested Virtualization enabled Oct 2 19:32:51.889299 kernel: SVM: kvm: Nested Paging enabled Oct 2 19:32:51.904526 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:32:51.921917 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:32:51.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.923677 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:32:51.939928 lvm[1026]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:32:51.964506 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:32:51.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.965406 systemd[1]: Reached target cryptsetup.target. Oct 2 19:32:51.967002 systemd[1]: Starting lvm2-activation.service... Oct 2 19:32:51.970273 lvm[1027]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:32:51.995732 systemd[1]: Finished lvm2-activation.service. Oct 2 19:32:51.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:51.997969 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:32:51.998650 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:32:51.998682 systemd[1]: Reached target local-fs.target. Oct 2 19:32:51.999283 systemd[1]: Reached target machines.target. Oct 2 19:32:52.004800 systemd[1]: Starting ldconfig.service... Oct 2 19:32:52.005816 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:32:52.005866 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:32:52.007426 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:32:52.009369 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:32:52.011106 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:32:52.011807 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:32:52.011834 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. 
Oct 2 19:32:52.012755 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:32:52.018592 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1029 (bootctl) Oct 2 19:32:52.020406 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:32:52.023957 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:32:52.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:52.025727 systemd-tmpfiles[1032]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:32:52.027022 systemd-tmpfiles[1032]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:32:52.032680 systemd-tmpfiles[1032]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:32:52.068820 systemd-fsck[1037]: fsck.fat 4.2 (2021-01-31) Oct 2 19:32:52.068820 systemd-fsck[1037]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 19:32:52.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:52.071462 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:32:52.073937 systemd[1]: Mounting boot.mount... Oct 2 19:32:52.349790 systemd[1]: Mounted boot.mount. Oct 2 19:32:52.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:52.361034 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:32:52.406987 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:32:52.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:52.409109 systemd[1]: Starting audit-rules.service... Oct 2 19:32:52.410559 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:32:52.412211 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:32:52.413000 audit: BPF prog-id=27 op=LOAD Oct 2 19:32:52.414357 systemd[1]: Starting systemd-resolved.service... Oct 2 19:32:52.485000 audit: BPF prog-id=28 op=LOAD Oct 2 19:32:52.486808 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:32:52.488250 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:32:52.489186 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:32:52.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:52.489992 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 2 19:32:52.497000 audit[1054]: SYSTEM_BOOT pid=1054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:32:52.500229 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:32:52.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:52.539033 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:32:52.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:52.547000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:32:52.547000 audit[1062]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd96368070 a2=420 a3=0 items=0 ppid=1042 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:52.547000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:32:52.548174 augenrules[1062]: No rules Oct 2 19:32:52.548690 systemd[1]: Finished audit-rules.service. Oct 2 19:32:52.550506 systemd-resolved[1046]: Positive Trust Anchors: Oct 2 19:32:52.550533 systemd-resolved[1046]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:32:52.550566 systemd-resolved[1046]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:32:52.550908 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:32:52.551807 systemd[1]: Reached target time-set.target. Oct 2 19:32:52.553258 systemd-timesyncd[1053]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:32:52.553316 systemd-timesyncd[1053]: Initial clock synchronization to Mon 2023-10-02 19:32:52.749992 UTC. Oct 2 19:32:52.565812 systemd-resolved[1046]: Defaulting to hostname 'linux'. Oct 2 19:32:52.567140 systemd[1]: Started systemd-resolved.service. Oct 2 19:32:52.567926 systemd[1]: Reached target network.target. Oct 2 19:32:52.568473 systemd[1]: Reached target nss-lookup.target. Oct 2 19:32:52.790669 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:32:52.791235 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:32:52.874735 ldconfig[1028]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:32:52.919909 systemd[1]: Finished ldconfig.service. Oct 2 19:32:52.923042 systemd[1]: Starting systemd-update-done.service... Oct 2 19:32:52.928202 systemd[1]: Finished systemd-update-done.service. Oct 2 19:32:52.928954 systemd[1]: Reached target sysinit.target. 
Oct 2 19:32:52.929612 systemd[1]: Started motdgen.path. Oct 2 19:32:52.930138 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:32:52.931091 systemd[1]: Started logrotate.timer. Oct 2 19:32:52.931752 systemd[1]: Started mdadm.timer. Oct 2 19:32:52.932304 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:32:52.932930 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:32:52.932959 systemd[1]: Reached target paths.target. Oct 2 19:32:52.933541 systemd[1]: Reached target timers.target. Oct 2 19:32:52.934425 systemd[1]: Listening on dbus.socket. Oct 2 19:32:52.935880 systemd[1]: Starting docker.socket... Oct 2 19:32:52.938495 systemd[1]: Listening on sshd.socket. Oct 2 19:32:52.939125 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:32:52.939436 systemd[1]: Listening on docker.socket. Oct 2 19:32:52.940018 systemd[1]: Reached target sockets.target. Oct 2 19:32:52.941056 systemd[1]: Reached target basic.target. Oct 2 19:32:52.941637 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:32:52.941656 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:32:52.942316 systemd[1]: Starting containerd.service... Oct 2 19:32:52.943738 systemd[1]: Starting dbus.service... Oct 2 19:32:52.945156 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:32:52.946444 systemd[1]: Starting extend-filesystems.service... Oct 2 19:32:52.947051 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:32:52.947894 systemd[1]: Starting motdgen.service... Oct 2 19:32:52.949884 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:32:52.951692 systemd[1]: Starting prepare-critools.service... Oct 2 19:32:52.953442 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:32:52.956881 systemd[1]: Starting sshd-keygen.service... Oct 2 19:32:52.960000 jq[1073]: false Oct 2 19:32:52.961911 systemd[1]: Starting systemd-logind.service... Oct 2 19:32:52.962608 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:32:52.962657 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:32:52.965703 extend-filesystems[1074]: Found sr0 Oct 2 19:32:52.965703 extend-filesystems[1074]: Found vda Oct 2 19:32:52.965703 extend-filesystems[1074]: Found vda1 Oct 2 19:32:52.965703 extend-filesystems[1074]: Found vda2 Oct 2 19:32:52.965703 extend-filesystems[1074]: Found vda3 Oct 2 19:32:52.965703 extend-filesystems[1074]: Found usr Oct 2 19:32:52.965703 extend-filesystems[1074]: Found vda4 Oct 2 19:32:52.965703 extend-filesystems[1074]: Found vda6 Oct 2 19:32:52.965703 extend-filesystems[1074]: Found vda7 Oct 2 19:32:52.963152 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Oct 2 19:32:53.055474 extend-filesystems[1074]: Found vda9 Oct 2 19:32:53.055474 extend-filesystems[1074]: Checking size of /dev/vda9 Oct 2 19:32:52.964049 systemd[1]: Starting update-engine.service... Oct 2 19:32:53.056799 jq[1092]: true Oct 2 19:32:52.966050 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:32:52.972929 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:32:52.973231 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:32:53.057276 tar[1096]: ./ Oct 2 19:32:53.057276 tar[1096]: ./loopback Oct 2 19:32:52.975521 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:32:53.057581 tar[1097]: crictl Oct 2 19:32:52.975738 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:32:52.987226 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:32:53.057913 jq[1102]: true Oct 2 19:32:52.987441 systemd[1]: Finished motdgen.service. Oct 2 19:32:53.077347 dbus-daemon[1072]: [system] SELinux support is enabled Oct 2 19:32:53.077555 systemd[1]: Started dbus.service. Oct 2 19:32:53.080397 tar[1096]: ./bandwidth Oct 2 19:32:53.079991 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:32:53.080010 systemd[1]: Reached target system-config.target. Oct 2 19:32:53.080661 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:32:53.080679 systemd[1]: Reached target user-config.target. Oct 2 19:32:53.154787 tar[1096]: ./ptp Oct 2 19:32:53.169443 extend-filesystems[1074]: Old size kept for /dev/vda9 Oct 2 19:32:53.169070 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:32:53.169215 systemd[1]: Finished extend-filesystems.service. Oct 2 19:32:53.219723 tar[1096]: ./vlan Oct 2 19:32:53.222091 systemd-logind[1084]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:32:53.222396 systemd-logind[1084]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:32:53.222660 systemd-logind[1084]: New seat seat0. Oct 2 19:32:53.224928 systemd[1]: Started systemd-logind.service. Oct 2 19:32:53.232775 env[1101]: time="2023-10-02T19:32:53.232254709Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:32:53.241823 update_engine[1086]: I1002 19:32:53.241323 1086 main.cc:92] Flatcar Update Engine starting Oct 2 19:32:53.246891 systemd[1]: Started update-engine.service. Oct 2 19:32:53.249278 systemd[1]: Started locksmithd.service. Oct 2 19:32:53.250140 update_engine[1086]: I1002 19:32:53.250113 1086 update_check_scheduler.cc:74] Next update check in 11m12s Oct 2 19:32:53.259308 tar[1096]: ./host-device Oct 2 19:32:53.269287 env[1101]: time="2023-10-02T19:32:53.269231110Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:32:53.269424 env[1101]: time="2023-10-02T19:32:53.269391424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:53.271428 env[1101]: time="2023-10-02T19:32:53.271390345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:32:53.271508 env[1101]: time="2023-10-02T19:32:53.271488758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:53.271821 env[1101]: time="2023-10-02T19:32:53.271800760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:32:53.271896 env[1101]: time="2023-10-02T19:32:53.271877124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:53.271981 env[1101]: time="2023-10-02T19:32:53.271960550Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:32:53.272052 env[1101]: time="2023-10-02T19:32:53.272033597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:53.272192 env[1101]: time="2023-10-02T19:32:53.272173882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:53.272489 env[1101]: time="2023-10-02T19:32:53.272466278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:53.272706 env[1101]: time="2023-10-02T19:32:53.272687176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:32:53.272785 env[1101]: time="2023-10-02T19:32:53.272765767Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:32:53.272900 env[1101]: time="2023-10-02T19:32:53.272881989Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:32:53.272973 env[1101]: time="2023-10-02T19:32:53.272954955Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:32:53.305216 tar[1096]: ./tuning Oct 2 19:32:53.342090 tar[1096]: ./vrf Oct 2 19:32:53.405413 tar[1096]: ./sbr Oct 2 19:32:53.459573 tar[1096]: ./tap Oct 2 19:32:53.473552 env[1101]: time="2023-10-02T19:32:53.473485060Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:32:53.473552 env[1101]: time="2023-10-02T19:32:53.473551158Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:32:53.473714 env[1101]: time="2023-10-02T19:32:53.473564545Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:32:53.473714 env[1101]: time="2023-10-02T19:32:53.473689298Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:32:53.473714 env[1101]: time="2023-10-02T19:32:53.473711738Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Oct 2 19:32:53.473793 env[1101]: time="2023-10-02T19:32:53.473724200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:32:53.473793 env[1101]: time="2023-10-02T19:32:53.473736384Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:32:53.473793 env[1101]: time="2023-10-02T19:32:53.473748949Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:32:53.473793 env[1101]: time="2023-10-02T19:32:53.473760673Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:32:53.473793 env[1101]: time="2023-10-02T19:32:53.473774530Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:32:53.473793 env[1101]: time="2023-10-02T19:32:53.473787311Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.473798859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.473917875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.473983951Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474207837Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474231221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474242082Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474284016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474295082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474307667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474317450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474328382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474338914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474348574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.475417 env[1101]: time="2023-10-02T19:32:53.474357966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 2 19:32:53.475829 env[1101]: time="2023-10-02T19:32:53.475802144Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:32:53.476855 env[1101]: time="2023-10-02T19:32:53.475931989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.476855 env[1101]: time="2023-10-02T19:32:53.475948794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.476855 env[1101]: time="2023-10-02T19:32:53.475960691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.476855 env[1101]: time="2023-10-02T19:32:53.475970812Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:32:53.476855 env[1101]: time="2023-10-02T19:32:53.475986385Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:32:53.476855 env[1101]: time="2023-10-02T19:32:53.475997400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:32:53.476855 env[1101]: time="2023-10-02T19:32:53.476015220Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:32:53.476855 env[1101]: time="2023-10-02T19:32:53.476049814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:32:53.478009 systemd[1]: Started containerd.service. Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.476234609Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false 
NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477093893Z" level=info msg="Connect containerd service" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477146523Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477608049Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477760991Z" level=info msg="Start subscribing containerd event" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477841409Z" level=info msg="Start recovering state" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477858921Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477888886Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477912711Z" level=info msg="Start event monitor" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477926066Z" level=info msg="Start snapshots syncer" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477936598Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:32:53.478530 env[1101]: time="2023-10-02T19:32:53.477943168Z" level=info msg="Start streaming server" Oct 2 19:32:53.480886 env[1101]: time="2023-10-02T19:32:53.478682718Z" level=info msg="containerd successfully booted in 0.247390s" Oct 2 19:32:53.484076 bash[1124]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:32:53.485877 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:32:53.504753 tar[1096]: ./dhcp Oct 2 19:32:53.507556 locksmithd[1127]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:32:53.633169 tar[1096]: ./static Oct 2 19:32:53.696355 tar[1096]: ./firewall Oct 2 19:32:53.718757 systemd-networkd[1006]: eth0: Gained IPv6LL Oct 2 19:32:53.747897 tar[1096]: ./macvlan Oct 2 19:32:53.796399 tar[1096]: ./dummy Oct 2 19:32:53.837113 systemd[1]: Finished prepare-critools.service. Oct 2 19:32:53.857700 tar[1096]: ./bridge Oct 2 19:32:53.915093 tar[1096]: ./ipvlan Oct 2 19:32:53.962183 tar[1096]: ./portmap Oct 2 19:32:54.015034 tar[1096]: ./host-local Oct 2 19:32:54.089445 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:32:54.124338 sshd_keygen[1094]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:32:54.146638 systemd[1]: Finished sshd-keygen.service. Oct 2 19:32:54.148616 systemd[1]: Starting issuegen.service... Oct 2 19:32:54.154160 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:32:54.154340 systemd[1]: Finished issuegen.service. Oct 2 19:32:54.156391 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:32:54.161951 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:32:54.164087 systemd[1]: Started getty@tty1.service. Oct 2 19:32:54.165677 systemd[1]: Started serial-getty@ttyS0.service. 
Oct 2 19:32:54.166389 systemd[1]: Reached target getty.target. Oct 2 19:32:54.166989 systemd[1]: Reached target multi-user.target. Oct 2 19:32:54.168542 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:32:54.175973 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:32:54.176154 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:32:54.177036 systemd[1]: Startup finished in 809ms (kernel) + 8.040s (initrd) + 8.450s (userspace) = 17.300s. Oct 2 19:33:02.299172 systemd[1]: Created slice system-sshd.slice. Oct 2 19:33:02.300075 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:60294.service. Oct 2 19:33:02.342355 sshd[1154]: Accepted publickey for core from 10.0.0.1 port 60294 ssh2: RSA SHA256:b8JkJ8STGPpktId2vNwDpv0odk05FCpuAkJstdQTTnk Oct 2 19:33:02.343846 sshd[1154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:33:02.351815 systemd[1]: Created slice user-500.slice. Oct 2 19:33:02.352819 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:33:02.354557 systemd-logind[1084]: New session 1 of user core. Oct 2 19:33:02.360359 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:33:02.361593 systemd[1]: Starting user@500.service... Oct 2 19:33:02.364405 (systemd)[1157]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:33:02.433708 systemd[1157]: Queued start job for default target default.target. Oct 2 19:33:02.434138 systemd[1157]: Reached target paths.target. Oct 2 19:33:02.434157 systemd[1157]: Reached target sockets.target. Oct 2 19:33:02.434168 systemd[1157]: Reached target timers.target. Oct 2 19:33:02.434178 systemd[1157]: Reached target basic.target. Oct 2 19:33:02.434210 systemd[1157]: Reached target default.target. Oct 2 19:33:02.434230 systemd[1157]: Startup finished in 63ms. Oct 2 19:33:02.434325 systemd[1]: Started user@500.service. Oct 2 19:33:02.435238 systemd[1]: Started session-1.scope. Oct 2 19:33:02.486422 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:60310.service. Oct 2 19:33:02.521260 sshd[1166]: Accepted publickey for core from 10.0.0.1 port 60310 ssh2: RSA SHA256:b8JkJ8STGPpktId2vNwDpv0odk05FCpuAkJstdQTTnk Oct 2 19:33:02.522611 sshd[1166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:33:02.526353 systemd-logind[1084]: New session 2 of user core. Oct 2 19:33:02.527180 systemd[1]: Started session-2.scope. Oct 2 19:33:02.583519 sshd[1166]: pam_unix(sshd:session): session closed for user core Oct 2 19:33:02.586250 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:60310.service: Deactivated successfully. Oct 2 19:33:02.586770 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:33:02.587283 systemd-logind[1084]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:33:02.588446 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:60322.service. Oct 2 19:33:02.589338 systemd-logind[1084]: Removed session 2. Oct 2 19:33:02.623398 sshd[1172]: Accepted publickey for core from 10.0.0.1 port 60322 ssh2: RSA SHA256:b8JkJ8STGPpktId2vNwDpv0odk05FCpuAkJstdQTTnk Oct 2 19:33:02.624846 sshd[1172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:33:02.628241 systemd-logind[1084]: New session 3 of user core. Oct 2 19:33:02.629186 systemd[1]: Started session-3.scope. 
Oct 2 19:33:02.682301 sshd[1172]: pam_unix(sshd:session): session closed for user core
Oct 2 19:33:02.685683 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:60322.service: Deactivated successfully.
Oct 2 19:33:02.686303 systemd[1]: session-3.scope: Deactivated successfully.
Oct 2 19:33:02.686868 systemd-logind[1084]: Session 3 logged out. Waiting for processes to exit.
Oct 2 19:33:02.688106 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:60338.service.
Oct 2 19:33:02.688763 systemd-logind[1084]: Removed session 3.
Oct 2 19:33:02.728276 sshd[1178]: Accepted publickey for core from 10.0.0.1 port 60338 ssh2: RSA SHA256:b8JkJ8STGPpktId2vNwDpv0odk05FCpuAkJstdQTTnk
Oct 2 19:33:02.729550 sshd[1178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:33:02.732828 systemd-logind[1084]: New session 4 of user core.
Oct 2 19:33:02.733576 systemd[1]: Started session-4.scope.
Oct 2 19:33:02.785919 sshd[1178]: pam_unix(sshd:session): session closed for user core
Oct 2 19:33:02.788200 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:60338.service: Deactivated successfully.
Oct 2 19:33:02.788676 systemd[1]: session-4.scope: Deactivated successfully.
Oct 2 19:33:02.789090 systemd-logind[1084]: Session 4 logged out. Waiting for processes to exit.
Oct 2 19:33:02.789954 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:60348.service.
Oct 2 19:33:02.790581 systemd-logind[1084]: Removed session 4.
Oct 2 19:33:02.825395 sshd[1184]: Accepted publickey for core from 10.0.0.1 port 60348 ssh2: RSA SHA256:b8JkJ8STGPpktId2vNwDpv0odk05FCpuAkJstdQTTnk
Oct 2 19:33:02.826610 sshd[1184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:33:02.829991 systemd-logind[1084]: New session 5 of user core.
Oct 2 19:33:02.830826 systemd[1]: Started session-5.scope.
Oct 2 19:33:02.887719 sudo[1187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 2 19:33:02.887884 sudo[1187]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:33:02.896753 dbus-daemon[1072]: \xd0\u001d\u001d\xc4\xdcU: received setenforce notice (enforcing=-2058372624)
Oct 2 19:33:02.898828 sudo[1187]: pam_unix(sudo:session): session closed for user root
Oct 2 19:33:02.900722 sshd[1184]: pam_unix(sshd:session): session closed for user core
Oct 2 19:33:02.903552 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:60348.service: Deactivated successfully.
Oct 2 19:33:02.904049 systemd[1]: session-5.scope: Deactivated successfully.
Oct 2 19:33:02.904534 systemd-logind[1084]: Session 5 logged out. Waiting for processes to exit.
Oct 2 19:33:02.905410 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:60364.service.
Oct 2 19:33:02.906216 systemd-logind[1084]: Removed session 5.
Oct 2 19:33:02.940126 sshd[1191]: Accepted publickey for core from 10.0.0.1 port 60364 ssh2: RSA SHA256:b8JkJ8STGPpktId2vNwDpv0odk05FCpuAkJstdQTTnk
Oct 2 19:33:02.941409 sshd[1191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:33:02.944598 systemd-logind[1084]: New session 6 of user core.
Oct 2 19:33:02.945410 systemd[1]: Started session-6.scope.
Oct 2 19:33:02.998497 sudo[1195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 2 19:33:02.998727 sudo[1195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:33:03.001831 sudo[1195]: pam_unix(sudo:session): session closed for user root
Oct 2 19:33:03.007353 sudo[1194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 2 19:33:03.007547 sudo[1194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:33:03.015826 systemd[1]: Stopping audit-rules.service...
Oct 2 19:33:03.016000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 2 19:33:03.017009 auditctl[1198]: No rules
Oct 2 19:33:03.017218 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 2 19:33:03.017341 systemd[1]: Stopped audit-rules.service.
Oct 2 19:33:03.017748 kernel: kauditd_printk_skb: 128 callbacks suppressed
Oct 2 19:33:03.017821 kernel: audit: type=1305 audit(1696275183.016:161): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 2 19:33:03.018591 systemd[1]: Starting audit-rules.service...
Oct 2 19:33:03.016000 audit[1198]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3655bb20 a2=420 a3=0 items=0 ppid=1 pid=1198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:33:03.021996 kernel: audit: type=1300 audit(1696275183.016:161): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3655bb20 a2=420 a3=0 items=0 ppid=1 pid=1198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:33:03.022050 kernel: audit: type=1327 audit(1696275183.016:161): proctitle=2F7362696E2F617564697463746C002D44
Oct 2 19:33:03.016000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Oct 2 19:33:03.023088 kernel: audit: type=1131 audit(1696275183.016:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:03.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:03.034902 augenrules[1215]: No rules
Oct 2 19:33:03.035418 systemd[1]: Finished audit-rules.service.
Oct 2 19:33:03.036323 sudo[1194]: pam_unix(sudo:session): session closed for user root
Oct 2 19:33:03.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:03.073098 sshd[1191]: pam_unix(sshd:session): session closed for user core
Oct 2 19:33:03.035000 audit[1194]: USER_END pid=1194 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=?
res=success' Oct 2 19:33:03.077701 kernel: audit: type=1130 audit(1696275183.035:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.077793 kernel: audit: type=1106 audit(1696275183.035:164): pid=1194 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.077820 kernel: audit: type=1104 audit(1696275183.035:165): pid=1194 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.035000 audit[1194]: CRED_DISP pid=1194 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.077081 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:60364.service: Deactivated successfully. Oct 2 19:33:03.077574 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:33:03.078103 systemd-logind[1084]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:33:03.078749 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:60370.service. Oct 2 19:33:03.079304 kernel: audit: type=1106 audit(1696275183.072:166): pid=1191 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:03.072000 audit[1191]: USER_END pid=1191 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:03.079515 systemd-logind[1084]: Removed session 6. Oct 2 19:33:03.072000 audit[1191]: CRED_DISP pid=1191 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:03.084226 kernel: audit: type=1104 audit(1696275183.072:167): pid=1191 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:03.084272 kernel: audit: type=1131 audit(1696275183.075:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.18:22-10.0.0.1:60364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.18:22-10.0.0.1:60364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.18:22-10.0.0.1:60370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:33:03.112000 audit[1221]: USER_ACCT pid=1221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:03.113950 sshd[1221]: Accepted publickey for core from 10.0.0.1 port 60370 ssh2: RSA SHA256:b8JkJ8STGPpktId2vNwDpv0odk05FCpuAkJstdQTTnk Oct 2 19:33:03.113000 audit[1221]: CRED_ACQ pid=1221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:03.113000 audit[1221]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5c0e1890 a2=3 a3=0 items=0 ppid=1 pid=1221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:03.113000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:33:03.115017 sshd[1221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:33:03.118166 systemd-logind[1084]: New session 7 of user core. Oct 2 19:33:03.119037 systemd[1]: Started session-7.scope. Oct 2 19:33:03.120000 audit[1221]: USER_START pid=1221 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:03.121000 audit[1223]: CRED_ACQ pid=1223 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:03.169000 audit[1224]: USER_ACCT pid=1224 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.169000 audit[1224]: CRED_REFR pid=1224 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.170356 sudo[1224]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:33:03.170510 sudo[1224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:33:03.171000 audit[1224]: USER_START pid=1224 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:33:03.692075 systemd[1]: Reloading. 
Oct 2 19:33:03.752977 /usr/lib/systemd/system-generators/torcx-generator[1254]: time="2023-10-02T19:33:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:33:03.753003 /usr/lib/systemd/system-generators/torcx-generator[1254]: time="2023-10-02T19:33:03Z" level=info msg="torcx already run" Oct 2 19:33:03.828638 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:33:03.828659 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:33:03.847909 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.899000 audit: BPF prog-id=34 op=LOAD Oct 2 19:33:03.899000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.900000 audit: BPF prog-id=35 op=LOAD Oct 2 19:33:03.900000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit: BPF prog-id=36 op=LOAD Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.902000 audit: BPF prog-id=37 op=LOAD Oct 2 19:33:03.902000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:33:03.902000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit: BPF prog-id=38 op=LOAD Oct 2 19:33:03.903000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit: BPF prog-id=39 op=LOAD Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit: BPF prog-id=40 op=LOAD Oct 2 19:33:03.904000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:33:03.904000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.904000 audit: BPF prog-id=41 op=LOAD Oct 2 19:33:03.904000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit: BPF prog-id=42 op=LOAD Oct 2 19:33:03.905000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:33:03.905000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit: BPF prog-id=43 op=LOAD Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit: BPF prog-id=44 op=LOAD Oct 2 19:33:03.906000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:33:03.906000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.906000 audit: BPF prog-id=45 op=LOAD Oct 2 19:33:03.906000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit: BPF prog-id=46 op=LOAD Oct 2 19:33:03.907000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit: BPF prog-id=47 op=LOAD Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:03.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:33:03.907000 audit: BPF prog-id=48 op=LOAD
Oct 2 19:33:03.907000 audit: BPF prog-id=19 op=UNLOAD
Oct 2 19:33:03.907000 audit: BPF prog-id=20 op=UNLOAD
Oct 2 19:33:03.914349 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 2 19:33:03.918772 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 2 19:33:03.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:03.919352 systemd[1]: Reached target network-online.target.
Oct 2 19:33:03.920562 systemd[1]: Started kubelet.service.
Oct 2 19:33:03.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:03.928843 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:33:03.934876 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 2 19:33:03.935057 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:33:03.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:03.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:03.968242 kubelet[1294]: E1002 19:33:03.968114 1294 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:33:03.970908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:33:03.971060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:33:03.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:33:04.212156 systemd[1]: Stopped kubelet.service.
Oct 2 19:33:04.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:04.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:33:04.227714 systemd[1]: Reloading.
Oct 2 19:33:04.275564 /usr/lib/systemd/system-generators/torcx-generator[1360]: time="2023-10-02T19:33:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:33:04.275590 /usr/lib/systemd/system-generators/torcx-generator[1360]: time="2023-10-02T19:33:04Z" level=info msg="torcx already run" Oct 2 19:33:04.334447 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:33:04.334462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:33:04.352678 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit: BPF prog-id=49 op=LOAD Oct 2 19:33:04.402000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.402000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.403000 audit: BPF prog-id=50 op=LOAD Oct 2 19:33:04.403000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit: BPF prog-id=51 op=LOAD Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.404000 audit: BPF prog-id=52 op=LOAD Oct 2 19:33:04.404000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:33:04.404000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:33:04.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:33:04.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit: BPF prog-id=53 op=LOAD Oct 2 19:33:04.406000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit: BPF prog-id=54 op=LOAD Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit: BPF prog-id=55 op=LOAD Oct 2 19:33:04.406000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:33:04.406000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.406000 audit: BPF prog-id=56 op=LOAD Oct 2 19:33:04.406000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit: BPF prog-id=57 op=LOAD Oct 2 19:33:04.408000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit: BPF prog-id=58 op=LOAD Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit: BPF prog-id=59 op=LOAD Oct 2 19:33:04.408000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:33:04.408000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.408000 audit: BPF prog-id=60 op=LOAD Oct 2 19:33:04.408000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit: BPF prog-id=61 op=LOAD Oct 2 19:33:04.409000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit: BPF prog-id=62 op=LOAD Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
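The long run of audit records above (and the few that continue just below) repeats two SELinux denials against pid=1: capability2 { perfmon } (capability=38, CAP_PERFMON) and { bpf } (capability=39, CAP_BPF), interleaved with `audit: BPF prog-id=... op=LOAD/UNLOAD` records while systemd swaps out what are most likely its per-unit BPF programs around the kubelet start, all with permissive=0 under the kernel_t domain. One quick way to get an overview of such a storm is to tally the AVC records per command and permission; the sketch below is illustrative only and assumes the console text has been saved to a local file (the name `boot.log` is an assumption, not something taken from this log).

```python
#!/usr/bin/env python3
"""Tally SELinux AVC denials (like the records above) by command, permission
and capability number. Assumes the journal/console text was saved locally;
the file name boot.log is an illustrative assumption."""
import re
from collections import Counter

# Matches e.g.: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 ...
AVC_RE = re.compile(
    r'AVC avc:\s+denied\s+\{ (?P<perm>\w+) \}.*?'
    r'comm="(?P<comm>[^"]+)"(?: capability=(?P<cap>\d+))?'
)

counts = Counter()
with open("boot.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        for m in AVC_RE.finditer(line):
            counts[(m["comm"], m["perm"], m["cap"] or "-")] += 1

for (comm, perm, cap), n in counts.most_common():
    print(f"{n:6d}  comm={comm:10s} perm={perm:10s} capability={cap}")
```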
Oct 2 19:33:04.409000 audit: BPF prog-id=63 op=LOAD Oct 2 19:33:04.409000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:33:04.409000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:33:04.420288 systemd[1]: Started kubelet.service. Oct 2 19:33:04.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:33:04.460158 kubelet[1401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:33:04.460534 kubelet[1401]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 19:33:04.460534 kubelet[1401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:33:04.460709 kubelet[1401]: I1002 19:33:04.460577 1401 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:33:04.662271 kubelet[1401]: I1002 19:33:04.662228 1401 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Oct 2 19:33:04.662271 kubelet[1401]: I1002 19:33:04.662262 1401 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:33:04.662491 kubelet[1401]: I1002 19:33:04.662475 1401 server.go:837] "Client rotation is on, will bootstrap in background" Oct 2 19:33:04.664086 kubelet[1401]: I1002 19:33:04.664070 1401 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:33:04.667280 kubelet[1401]: I1002 19:33:04.667269 1401 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:33:04.667460 kubelet[1401]: I1002 19:33:04.667446 1401 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:33:04.667508 kubelet[1401]: I1002 19:33:04.667502 1401 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:33:04.667592 kubelet[1401]: I1002 19:33:04.667527 1401 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:33:04.667592 kubelet[1401]: I1002 19:33:04.667535 1401 container_manager_linux.go:302] "Creating device plugin manager" Oct 2 19:33:04.667642 kubelet[1401]: I1002 19:33:04.667605 1401 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:33:04.673905 kubelet[1401]: I1002 19:33:04.673880 1401 kubelet.go:405] "Attempting to sync node with API server" Oct 2 19:33:04.673905 kubelet[1401]: I1002 19:33:04.673912 1401 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:33:04.674072 kubelet[1401]: I1002 19:33:04.673938 1401 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:33:04.674072 kubelet[1401]: I1002 19:33:04.673966 1401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:33:04.674072 kubelet[1401]: E1002 19:33:04.674021 1401 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:04.674072 kubelet[1401]: E1002 19:33:04.674068 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:04.675542 kubelet[1401]: I1002 19:33:04.674784 1401 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:33:04.675542 kubelet[1401]: W1002 19:33:04.675128 1401 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:33:04.676534 kubelet[1401]: I1002 19:33:04.675656 1401 server.go:1168] "Started kubelet" Oct 2 19:33:04.677143 kubelet[1401]: I1002 19:33:04.676913 1401 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:33:04.677312 kubelet[1401]: I1002 19:33:04.677297 1401 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:33:04.677000 audit[1401]: AVC avc: denied { mac_admin } for pid=1401 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.677000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:33:04.677000 audit[1401]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007dd3e0 a1=c0007da8b8 a2=c0007dd3b0 a3=25 items=0 ppid=1 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.677000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:33:04.677000 audit[1401]: AVC avc: denied { mac_admin } for pid=1401 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:04.677000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:33:04.677000 audit[1401]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00043cd20 a1=c0007da8d0 a2=c0007dd470 a3=25 items=0 ppid=1 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.677000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:33:04.677955 kubelet[1401]: E1002 19:33:04.677572 1401 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:33:04.677955 kubelet[1401]: E1002 19:33:04.677599 1401 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:33:04.677955 kubelet[1401]: I1002 19:33:04.677623 1401 kubelet.go:1355] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:33:04.677955 kubelet[1401]: I1002 19:33:04.677666 1401 kubelet.go:1359] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:33:04.677955 kubelet[1401]: I1002 19:33:04.677735 1401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:33:04.677955 kubelet[1401]: I1002 19:33:04.677939 1401 server.go:461] "Adding debug handlers to kubelet server" Oct 2 19:33:04.679400 kubelet[1401]: I1002 19:33:04.678952 1401 volume_manager.go:284] "Starting Kubelet Volume Manager" Oct 2 19:33:04.679400 kubelet[1401]: I1002 19:33:04.679038 1401 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Oct 2 19:33:04.680957 kubelet[1401]: W1002 19:33:04.680937 1401 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:33:04.681017 kubelet[1401]: E1002 19:33:04.680967 1401 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:33:04.681070 kubelet[1401]: E1002 19:33:04.681051 1401 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.18\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:33:04.681111 kubelet[1401]: W1002 19:33:04.681104 1401 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:33:04.681133 kubelet[1401]: E1002 19:33:04.681116 1401 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:33:04.681207 kubelet[1401]: W1002 19:33:04.681189 1401 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.18" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:33:04.681207 kubelet[1401]: E1002 19:33:04.681204 1401 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.18" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:33:04.681345 kubelet[1401]: E1002 19:33:04.681248 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b0fe6b715", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 675632917, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 675632917, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:33:04.682013 kubelet[1401]: E1002 19:33:04.681853 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b10048ca5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 677588133, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 677588133, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:33:04.701654 kubelet[1401]: I1002 19:33:04.701612 1401 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:33:04.701654 kubelet[1401]: I1002 19:33:04.701630 1401 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:33:04.701654 kubelet[1401]: I1002 19:33:04.701643 1401 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:33:04.702190 kubelet[1401]: E1002 19:33:04.702106 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b11696980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.18 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700975488, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700975488, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:33:04.702865 kubelet[1401]: E1002 19:33:04.702805 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b11697e65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.18 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700980837, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700980837, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:33:04.703473 kubelet[1401]: E1002 19:33:04.703422 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b1169892e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.18 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700983598, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700983598, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:33:04.706000 audit[1419]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:04.706000 audit[1419]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7ba65430 a2=0 a3=7ffc7ba6541c items=0 ppid=1401 pid=1419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:33:04.708000 audit[1421]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:04.708000 audit[1421]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc3175b3b0 a2=0 a3=7ffc3175b39c items=0 ppid=1401 pid=1421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:33:04.779816 kubelet[1401]: I1002 19:33:04.779765 1401 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.18" Oct 2 19:33:04.780739 kubelet[1401]: E1002 19:33:04.780714 1401 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.18" Oct 2 19:33:04.780828 kubelet[1401]: E1002 19:33:04.780772 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b11696980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 
1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.18 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700975488, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 779717472, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b11696980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:33:04.781785 kubelet[1401]: E1002 19:33:04.781693 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b11697e65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.18 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700980837, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 779726722, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b11697e65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
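All of the rejected requests above fail as system:anonymous because the kubelet has not finished TLS bootstrapping yet ("Client rotation is on, will bootstrap in background" earlier): until its bootstrap CSR is approved and a client certificate is issued, node registration, lease handling and event posting are all forbidden by RBAC. A hedged sketch, assuming the official kubernetes Python client and an admin kubeconfig on a workstation, to check whether bootstrap CSRs exist and have been approved:

```python
#!/usr/bin/env python3
"""List kubelet bootstrap CSRs and their approval state.

A minimal sketch assuming the `kubernetes` Python client and an admin
kubeconfig; it only inspects CSRs, it does not approve anything."""
from kubernetes import client, config

config.load_kube_config()              # reads ~/.kube/config by default
csr_api = client.CertificatesV1Api()

for csr in csr_api.list_certificate_signing_request().items:
    if not csr.spec.username.startswith("system:bootstrap"):
        continue                       # keep only bootstrap-token identities
    conditions = [c.type for c in (csr.status.conditions or [])]
    state = ",".join(conditions) or "Pending"
    print(f"{csr.metadata.name}: requested by {csr.spec.username} -> {state}")
```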
Oct 2 19:33:04.782534 kubelet[1401]: E1002 19:33:04.782462 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b1169892e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.18 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700983598, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 779731880, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b1169892e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:33:04.710000 audit[1423]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:04.710000 audit[1423]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff5dfa3ef0 a2=0 a3=7fff5dfa3edc items=0 ppid=1401 pid=1423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:33:04.807000 audit[1428]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:04.807000 audit[1428]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffde9d999e0 a2=0 a3=7ffde9d999cc items=0 ppid=1401 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:33:04.846000 audit[1433]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:04.846000 audit[1433]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd2eda0d70 a2=0 a3=7ffd2eda0d5c items=0 ppid=1401 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.846000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:33:04.847096 kubelet[1401]: I1002 19:33:04.847022 1401 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:33:04.847000 audit[1435]: NETFILTER_CFG table=mangle:7 family=2 entries=1 op=nft_register_chain pid=1435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:04.847000 audit[1435]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd10a73c30 a2=0 a3=7ffd10a73c1c items=0 ppid=1401 pid=1435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:33:04.847000 audit[1434]: NETFILTER_CFG table=mangle:8 family=10 entries=2 op=nft_register_chain pid=1434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:04.847000 audit[1434]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc4cf766a0 a2=0 a3=7ffc4cf7668c items=0 ppid=1401 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:33:04.848304 kubelet[1401]: I1002 19:33:04.848273 1401 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:33:04.848430 kubelet[1401]: I1002 19:33:04.848316 1401 status_manager.go:207] "Starting to sync pod status with apiserver" Oct 2 19:33:04.848430 kubelet[1401]: I1002 19:33:04.848340 1401 kubelet.go:2257] "Starting kubelet main sync loop" Oct 2 19:33:04.848430 kubelet[1401]: E1002 19:33:04.848398 1401 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 2 19:33:04.848000 audit[1436]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:04.848000 audit[1436]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffececc0c60 a2=0 a3=7ffececc0c4c items=0 ppid=1401 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:33:04.848000 audit[1437]: NETFILTER_CFG table=mangle:10 family=10 entries=1 op=nft_register_chain pid=1437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:04.848000 audit[1437]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf3dc2960 a2=0 a3=7ffcf3dc294c items=0 ppid=1401 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.848000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:33:04.849656 kubelet[1401]: W1002 19:33:04.849558 1401 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:33:04.849656 kubelet[1401]: E1002 19:33:04.849581 1401 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:33:04.849000 audit[1438]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:04.849000 audit[1438]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6103de10 a2=0 a3=7ffd6103ddfc items=0 ppid=1401 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:33:04.849000 audit[1439]: NETFILTER_CFG table=nat:12 family=10 entries=2 op=nft_register_chain pid=1439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:04.849000 audit[1439]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffccee41670 a2=0 a3=7ffccee4165c items=0 ppid=1401 pid=1439 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:33:04.850000 audit[1440]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:04.850000 audit[1440]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeeec6a840 a2=0 a3=7ffeeec6a82c items=0 ppid=1401 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:04.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:33:04.872154 kubelet[1401]: I1002 19:33:04.872115 1401 policy_none.go:49] "None policy: Start" Oct 2 19:33:04.872837 kubelet[1401]: I1002 19:33:04.872817 1401 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:33:04.872837 kubelet[1401]: I1002 19:33:04.872839 1401 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:33:04.883158 kubelet[1401]: E1002 19:33:04.883135 1401 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.18\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:33:04.948699 kubelet[1401]: E1002 19:33:04.948572 1401 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 2 19:33:04.981653 kubelet[1401]: I1002 19:33:04.981612 1401 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.18" Oct 2 19:33:04.982911 kubelet[1401]: E1002 19:33:04.982818 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b11696980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.18 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700975488, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 981585089, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b11696980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
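Decoding the NETFILTER_CFG/PROCTITLE records above (with the helper shown earlier) shows the kubelet laying down its baseline firewall state for both address families: a KUBE-IPTABLES-HINT chain in mangle, a KUBE-FIREWALL chain in filter hooked from INPUT and OUTPUT with a rule commented "block incoming localnet connections" for --dst 127.0.0.0/8, and KUBE-KUBELET-CANARY chains in mangle, nat and filter, which is what the two "Initialized iptables rules." messages (IPv4 and IPv6) summarize. A sketch to inspect those chains on the node afterwards; it assumes root on the node, and any chain missing in a given family simply yields iptables' own error text:

```python
#!/usr/bin/env python3
"""Print the kubelet-created chains for both families (run as root on the node).

Chain and table names come from the decoded PROCTITLE records above; a chain
missing in one family just prints iptables' error message."""
import subprocess

CHAINS = [
    ("mangle", "KUBE-IPTABLES-HINT"),
    ("filter", "KUBE-FIREWALL"),
    ("mangle", "KUBE-KUBELET-CANARY"),
    ("nat", "KUBE-KUBELET-CANARY"),
    ("filter", "KUBE-KUBELET-CANARY"),
]

for cmd in ("iptables", "ip6tables"):
    for table, chain in CHAINS:
        res = subprocess.run([cmd, "-t", table, "-S", chain],
                             capture_output=True, text=True)
        body = res.stdout.strip() or res.stderr.strip()
        print(f"{cmd} -t {table} {chain}:\n  " + body.replace("\n", "\n  "))
```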
Oct 2 19:33:04.983059 kubelet[1401]: E1002 19:33:04.982982 1401 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.18" Oct 2 19:33:04.983633 kubelet[1401]: E1002 19:33:04.983552 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b11697e65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.18 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700980837, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 981589634, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b11697e65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:33:04.984340 kubelet[1401]: E1002 19:33:04.984296 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b1169892e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.18 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700983598, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 981592314, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b1169892e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:33:05.077179 systemd[1]: Created slice kubepods.slice. Oct 2 19:33:05.085588 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:33:05.088856 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:33:05.098173 kubelet[1401]: I1002 19:33:05.098119 1401 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:33:05.096000 audit[1401]: AVC avc: denied { mac_admin } for pid=1401 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:05.096000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:33:05.096000 audit[1401]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0010c5710 a1=c0010e6ae0 a2=c0010c56e0 a3=25 items=0 ppid=1 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:05.096000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:33:05.098481 kubelet[1401]: I1002 19:33:05.098197 1401 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:33:05.098481 kubelet[1401]: I1002 19:33:05.098379 1401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:33:05.099898 kubelet[1401]: E1002 19:33:05.099876 1401 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.18\" not found" Oct 2 19:33:05.101543 kubelet[1401]: E1002 19:33:05.101414 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b2933126c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 5, 100067436, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 5, 100067436, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:33:05.285170 kubelet[1401]: E1002 19:33:05.285124 1401 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.18\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:33:05.384758 kubelet[1401]: I1002 19:33:05.384712 1401 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.18" Oct 2 19:33:05.386085 kubelet[1401]: E1002 19:33:05.386049 1401 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.18" Oct 2 19:33:05.386247 kubelet[1401]: E1002 19:33:05.386131 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b11696980", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.18 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700975488, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 5, 384613295, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b11696980" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:33:05.387156 kubelet[1401]: E1002 19:33:05.387072 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b11697e65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.18 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700980837, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 5, 384642081, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b11697e65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:33:05.387927 kubelet[1401]: E1002 19:33:05.387830 1401 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.18.178a614b1169892e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.18", UID:"10.0.0.18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.18 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.18"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 33, 4, 700983598, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 33, 5, 384652784, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.18.178a614b1169892e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
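
The lease controller above backs off between attempts while the node object cannot yet be created: the retry interval rises from 400ms to 800ms, and the anonymous-user rejections persist until the certificate rotation and successful node registration recorded further down. A rough sketch of that doubling pattern (the cap value here is an assumption, not taken from the kubelet):

    # Illustrative doubling backoff matching the 400ms -> 800ms progression in
    # the log; the 7s cap is an assumed placeholder, not the kubelet's value.
    def next_retry_interval(current_ms: int, cap_ms: int = 7000) -> int:
        return min(current_ms * 2, cap_ms)

    interval_ms = 400
    for _ in range(3):
        print(f"will retry in {interval_ms}ms")
        interval_ms = next_retry_interval(interval_ms)
    # prints 400, 800, 1600
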
Oct 2 19:33:05.552992 kubelet[1401]: W1002 19:33:05.552887 1401 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.18" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:33:05.552992 kubelet[1401]: E1002 19:33:05.552928 1401 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.18" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:33:05.664821 kubelet[1401]: I1002 19:33:05.664759 1401 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:33:05.675101 kubelet[1401]: E1002 19:33:05.675035 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:06.029803 kubelet[1401]: E1002 19:33:06.029702 1401 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.18" not found Oct 2 19:33:06.089182 kubelet[1401]: E1002 19:33:06.089141 1401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.18\" not found" node="10.0.0.18" Oct 2 19:33:06.187567 kubelet[1401]: I1002 19:33:06.187543 1401 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.18" Oct 2 19:33:06.190455 kubelet[1401]: I1002 19:33:06.190438 1401 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.18" Oct 2 19:33:06.232036 kubelet[1401]: E1002 19:33:06.231999 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:06.333122 kubelet[1401]: E1002 19:33:06.333060 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:06.432202 sudo[1224]: pam_unix(sudo:session): session closed for user root Oct 2 19:33:06.431000 audit[1224]: USER_END pid=1224 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:33:06.431000 audit[1224]: CRED_DISP pid=1224 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:33:06.433663 sshd[1221]: pam_unix(sshd:session): session closed for user core Oct 2 19:33:06.434587 kubelet[1401]: E1002 19:33:06.434526 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:06.434000 audit[1221]: USER_END pid=1221 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:06.434000 audit[1221]: CRED_DISP pid=1221 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:33:06.437193 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:60370.service: Deactivated successfully. 
Oct 2 19:33:06.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.18:22-10.0.0.1:60370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:33:06.438506 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:33:06.439891 systemd-logind[1084]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:33:06.441395 systemd-logind[1084]: Removed session 7. Oct 2 19:33:06.534724 kubelet[1401]: E1002 19:33:06.534641 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:06.635873 kubelet[1401]: E1002 19:33:06.635735 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:06.675365 kubelet[1401]: E1002 19:33:06.675323 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:06.736304 kubelet[1401]: E1002 19:33:06.736251 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:06.837419 kubelet[1401]: E1002 19:33:06.837330 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:06.937553 kubelet[1401]: E1002 19:33:06.937425 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:07.038388 kubelet[1401]: E1002 19:33:07.038282 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:07.139542 kubelet[1401]: E1002 19:33:07.139425 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:07.239713 kubelet[1401]: E1002 19:33:07.239565 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:07.340653 kubelet[1401]: E1002 19:33:07.340567 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:07.441576 kubelet[1401]: E1002 19:33:07.441497 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:07.542252 kubelet[1401]: E1002 19:33:07.542204 1401 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.18\" not found" Oct 2 19:33:07.643917 kubelet[1401]: I1002 19:33:07.643889 1401 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:33:07.644339 env[1101]: time="2023-10-02T19:33:07.644289248Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
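
The runtime-config update above hands the node the pod CIDR 192.168.1.0/24, while containerd keeps waiting because no CNI config has been dropped yet. A quick standard-library check of what that block provides (purely illustrative):

    # Inspect the advertised pod CIDR from the log with the stdlib ipaddress module.
    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.1.0/24")
    print(pod_cidr.num_addresses)         # 256 addresses in the block
    usable = list(pod_cidr.hosts())
    print(usable[0], "-", usable[-1])     # 192.168.1.1 - 192.168.1.254
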
Oct 2 19:33:07.644555 kubelet[1401]: I1002 19:33:07.644476 1401 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:33:07.676481 kubelet[1401]: E1002 19:33:07.676437 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:07.676481 kubelet[1401]: I1002 19:33:07.676460 1401 apiserver.go:52] "Watching apiserver" Oct 2 19:33:07.678719 kubelet[1401]: I1002 19:33:07.678677 1401 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:33:07.678785 kubelet[1401]: I1002 19:33:07.678780 1401 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:33:07.679544 kubelet[1401]: I1002 19:33:07.679527 1401 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Oct 2 19:33:07.683367 systemd[1]: Created slice kubepods-burstable-pod34c082fd_1c33_496d_be5a_a58a734e36df.slice. Oct 2 19:33:07.696321 systemd[1]: Created slice kubepods-besteffort-podb6c69dba_b5f3_4722_a1ca_b5db63174d43.slice. Oct 2 19:33:07.699358 kubelet[1401]: I1002 19:33:07.699324 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c082fd-1c33-496d-be5a-a58a734e36df-hubble-tls\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699358 kubelet[1401]: I1002 19:33:07.699359 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b6c69dba-b5f3-4722-a1ca-b5db63174d43-kube-proxy\") pod \"kube-proxy-vfl5t\" (UID: \"b6c69dba-b5f3-4722-a1ca-b5db63174d43\") " pod="kube-system/kube-proxy-vfl5t" Oct 2 19:33:07.699589 kubelet[1401]: I1002 19:33:07.699386 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-hostproc\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699589 kubelet[1401]: I1002 19:33:07.699406 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-lib-modules\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699589 kubelet[1401]: I1002 19:33:07.699427 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-host-proc-sys-net\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699589 kubelet[1401]: I1002 19:33:07.699523 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-host-proc-sys-kernel\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699589 kubelet[1401]: I1002 19:33:07.699587 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-cgroup\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699745 kubelet[1401]: I1002 19:33:07.699615 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cni-path\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699745 kubelet[1401]: I1002 19:33:07.699633 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-xtables-lock\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699745 kubelet[1401]: I1002 19:33:07.699704 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6c69dba-b5f3-4722-a1ca-b5db63174d43-lib-modules\") pod \"kube-proxy-vfl5t\" (UID: \"b6c69dba-b5f3-4722-a1ca-b5db63174d43\") " pod="kube-system/kube-proxy-vfl5t" Oct 2 19:33:07.699745 kubelet[1401]: I1002 19:33:07.699727 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-run\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699881 kubelet[1401]: I1002 19:33:07.699783 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-etc-cni-netd\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699881 kubelet[1401]: I1002 19:33:07.699826 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-config-path\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.699881 kubelet[1401]: I1002 19:33:07.699873 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6c69dba-b5f3-4722-a1ca-b5db63174d43-xtables-lock\") pod \"kube-proxy-vfl5t\" (UID: \"b6c69dba-b5f3-4722-a1ca-b5db63174d43\") " pod="kube-system/kube-proxy-vfl5t" Oct 2 19:33:07.699971 kubelet[1401]: I1002 19:33:07.699917 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfxds\" (UniqueName: \"kubernetes.io/projected/b6c69dba-b5f3-4722-a1ca-b5db63174d43-kube-api-access-kfxds\") pod \"kube-proxy-vfl5t\" (UID: \"b6c69dba-b5f3-4722-a1ca-b5db63174d43\") " pod="kube-system/kube-proxy-vfl5t" Oct 2 19:33:07.699971 kubelet[1401]: I1002 19:33:07.699944 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-bpf-maps\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 
19:33:07.700036 kubelet[1401]: I1002 19:33:07.700007 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c082fd-1c33-496d-be5a-a58a734e36df-clustermesh-secrets\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.700066 kubelet[1401]: I1002 19:33:07.700043 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kl42\" (UniqueName: \"kubernetes.io/projected/34c082fd-1c33-496d-be5a-a58a734e36df-kube-api-access-6kl42\") pod \"cilium-55wwm\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " pod="kube-system/cilium-55wwm" Oct 2 19:33:07.700101 kubelet[1401]: I1002 19:33:07.700070 1401 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:33:07.995722 kubelet[1401]: E1002 19:33:07.995603 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:07.996569 env[1101]: time="2023-10-02T19:33:07.996504926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-55wwm,Uid:34c082fd-1c33-496d-be5a-a58a734e36df,Namespace:kube-system,Attempt:0,}" Oct 2 19:33:08.004754 kubelet[1401]: E1002 19:33:08.004735 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:08.005135 env[1101]: time="2023-10-02T19:33:08.005100596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vfl5t,Uid:b6c69dba-b5f3-4722-a1ca-b5db63174d43,Namespace:kube-system,Attempt:0,}" Oct 2 19:33:08.677250 kubelet[1401]: E1002 19:33:08.677186 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:08.697672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840748741.mount: Deactivated successfully. 
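
The dns.go warnings above fire because the resolv.conf handed to pods lists more resolvers than the kubelet will apply; only the first three survive, which is why the applied line is "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that truncation (the fourth resolver in the example is hypothetical):

    # Kubelet applies at most three nameservers per pod resolv.conf; extras are
    # dropped, producing the "Nameserver limits exceeded" warning seen above.
    MAX_NAMESERVERS = 3

    def applied_nameservers(configured: list[str]) -> list[str]:
        return configured[:MAX_NAMESERVERS]

    # the 9.9.9.9 entry is a hypothetical fourth resolver for illustration
    print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
    # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
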
Oct 2 19:33:08.705148 env[1101]: time="2023-10-02T19:33:08.705109647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:08.706070 env[1101]: time="2023-10-02T19:33:08.706032535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:08.709063 env[1101]: time="2023-10-02T19:33:08.709026366Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:08.710115 env[1101]: time="2023-10-02T19:33:08.710088998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:08.711532 env[1101]: time="2023-10-02T19:33:08.711481598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:08.712847 env[1101]: time="2023-10-02T19:33:08.712819656Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:08.714975 env[1101]: time="2023-10-02T19:33:08.714942347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:08.716377 env[1101]: time="2023-10-02T19:33:08.716343321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:08.737633 env[1101]: time="2023-10-02T19:33:08.737532590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:33:08.737736 env[1101]: time="2023-10-02T19:33:08.737605337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:33:08.737736 env[1101]: time="2023-10-02T19:33:08.737628044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:33:08.737853 env[1101]: time="2023-10-02T19:33:08.737769506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156 pid=1462 runtime=io.containerd.runc.v2 Oct 2 19:33:08.738166 env[1101]: time="2023-10-02T19:33:08.738112603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:33:08.739373 env[1101]: time="2023-10-02T19:33:08.738144840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:33:08.739443 env[1101]: time="2023-10-02T19:33:08.739403707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:33:08.740122 env[1101]: time="2023-10-02T19:33:08.740069957Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72b462f2b54a06adbc7d199093a8f6e12159f5e9973d64a67be7a828217660ff pid=1461 runtime=io.containerd.runc.v2 Oct 2 19:33:08.752891 systemd[1]: Started cri-containerd-3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156.scope. Oct 2 19:33:08.761336 systemd[1]: Started cri-containerd-72b462f2b54a06adbc7d199093a8f6e12159f5e9973d64a67be7a828217660ff.scope. Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885298 kernel: kauditd_printk_skb: 416 callbacks suppressed Oct 2 19:33:08.885355 kernel: audit: type=1400 audit(1696275188.882:550): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885376 kernel: audit: type=1400 audit(1696275188.882:551): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.894655 kernel: audit: type=1400 audit(1696275188.882:552): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.894743 kernel: audit: type=1400 audit(1696275188.882:553): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.894759 kernel: audit: type=1400 audit(1696275188.882:554): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.894773 kernel: audit: type=1400 audit(1696275188.882:555): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.894804 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.897078 kernel: audit: type=1400 audit(1696275188.882:556): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.897111 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:33:08.897134 kernel: audit: type=1400 audit(1696275188.882:557): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.882000 audit: BPF prog-id=64 op=LOAD Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1461 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623436326632623534613036616462633764313939303933613866 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1461 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623436326632623534613036616462633764313939303933613866 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC 
avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.885000 audit: BPF prog-id=65 op=LOAD Oct 2 19:33:08.885000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00032b410 items=0 ppid=1461 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623436326632623534613036616462633764313939303933613866 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.886000 audit: BPF prog-id=66 op=LOAD Oct 2 19:33:08.886000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00032b458 items=0 ppid=1461 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623436326632623534613036616462633764313939303933613866 Oct 2 19:33:08.888000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:33:08.888000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { perfmon } for pid=1484 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.889000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit[1484]: AVC avc: denied { bpf } for pid=1484 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.888000 audit: BPF prog-id=67 op=LOAD Oct 2 19:33:08.888000 audit[1484]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00032b868 items=0 ppid=1461 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623436326632623534613036616462633764313939303933613866 Oct 2 19:33:08.897000 audit: BPF prog-id=68 op=LOAD Oct 2 19:33:08.897000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.897000 audit[1482]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1462 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.897000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365316565653730613134346238353237613336643533383861646531 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1462 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365316565653730613134346238353237613336643533383861646531 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit: BPF prog-id=69 op=LOAD Oct 2 19:33:08.900000 audit[1482]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00020a810 items=0 ppid=1462 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365316565653730613134346238353237613336643533383861646531 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit: BPF prog-id=70 op=LOAD Oct 2 19:33:08.900000 audit[1482]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00020a858 items=0 ppid=1462 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365316565653730613134346238353237613336643533383861646531 Oct 2 19:33:08.900000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:33:08.900000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: 
AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { perfmon } for pid=1482 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit[1482]: AVC avc: denied { bpf } for pid=1482 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:08.900000 audit: BPF prog-id=71 op=LOAD Oct 2 19:33:08.900000 audit[1482]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00020ac68 items=0 ppid=1462 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:08.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365316565653730613134346238353237613336643533383861646531 Oct 2 19:33:08.911349 env[1101]: time="2023-10-02T19:33:08.911305193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vfl5t,Uid:b6c69dba-b5f3-4722-a1ca-b5db63174d43,Namespace:kube-system,Attempt:0,} returns sandbox id \"72b462f2b54a06adbc7d199093a8f6e12159f5e9973d64a67be7a828217660ff\"" Oct 2 19:33:08.912387 kubelet[1401]: E1002 19:33:08.912358 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:08.912463 env[1101]: time="2023-10-02T19:33:08.912361191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-55wwm,Uid:34c082fd-1c33-496d-be5a-a58a734e36df,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\"" Oct 2 19:33:08.913323 env[1101]: time="2023-10-02T19:33:08.913296664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\"" Oct 2 19:33:08.913794 kubelet[1401]: E1002 19:33:08.913777 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:09.677882 kubelet[1401]: E1002 19:33:09.677839 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:09.921993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949226483.mount: Deactivated successfully. Oct 2 19:33:10.678412 kubelet[1401]: E1002 19:33:10.678365 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:11.679231 kubelet[1401]: E1002 19:33:11.679181 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:12.035761 env[1101]: time="2023-10-02T19:33:12.035692024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:12.038068 env[1101]: time="2023-10-02T19:33:12.038021538Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:12.040582 env[1101]: time="2023-10-02T19:33:12.040550001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:12.041804 env[1101]: time="2023-10-02T19:33:12.041775495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8e9eff2f6d0b398f9ac5f5a15c1cb7d5f468f28d64a78d593d57f72a969a54ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:12.042176 env[1101]: time="2023-10-02T19:33:12.042143197Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\" returns image reference \"sha256:ec57bbfaaae73ecc3c12f05d5ae974468cc0ef356dee588cd15fd471815c7985\"" Oct 2 19:33:12.042967 env[1101]: time="2023-10-02T19:33:12.042934415Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:33:12.044385 env[1101]: time="2023-10-02T19:33:12.044348046Z" level=info msg="CreateContainer within sandbox \"72b462f2b54a06adbc7d199093a8f6e12159f5e9973d64a67be7a828217660ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:33:12.408722 env[1101]: time="2023-10-02T19:33:12.408485746Z" level=info msg="CreateContainer within sandbox \"72b462f2b54a06adbc7d199093a8f6e12159f5e9973d64a67be7a828217660ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3526842358d529e79b06cc030bcdd21beca39e17684f269b2295998619bcea7\"" Oct 2 19:33:12.409852 env[1101]: time="2023-10-02T19:33:12.409817796Z" level=info msg="StartContainer for \"e3526842358d529e79b06cc030bcdd21beca39e17684f269b2295998619bcea7\"" Oct 2 19:33:12.464269 systemd[1]: Started cri-containerd-e3526842358d529e79b06cc030bcdd21beca39e17684f269b2295998619bcea7.scope. 
Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1461 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533353236383432333538643532396537396230366363303330626364 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit: BPF prog-id=72 op=LOAD Oct 2 19:33:12.489000 audit[1539]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0003246c0 items=0 ppid=1461 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.489000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533353236383432333538643532396537396230366363303330626364 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.489000 audit: BPF prog-id=73 op=LOAD Oct 2 19:33:12.489000 audit[1539]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c000324708 items=0 ppid=1461 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533353236383432333538643532396537396230366363303330626364 Oct 2 19:33:12.489000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:33:12.490000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { perfmon } for pid=1539 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit[1539]: AVC avc: denied { bpf } for pid=1539 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:33:12.490000 audit: BPF prog-id=74 op=LOAD Oct 2 19:33:12.490000 audit[1539]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c000324798 items=0 ppid=1461 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533353236383432333538643532396537396230366363303330626364 Oct 2 19:33:12.546000 audit[1589]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.546000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7de715b0 a2=0 a3=7ffe7de7159c items=0 ppid=1549 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:33:12.546000 audit[1590]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.546000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff16a17400 a2=0 a3=7fff16a173ec items=0 ppid=1549 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.546000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:33:12.547000 audit[1591]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_chain pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.547000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd10085b40 a2=0 a3=7ffd10085b2c items=0 ppid=1549 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:33:12.548000 audit[1592]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_chain pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.548000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff32894d30 a2=0 a3=7fff32894d1c items=0 ppid=1549 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:33:12.549000 audit[1593]: NETFILTER_CFG table=nat:18 family=10 entries=1 op=nft_register_chain pid=1593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.549000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1d112860 a2=0 a3=7ffd1d11284c items=0 ppid=1549 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:33:12.549000 audit[1594]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.549000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedc68b890 a2=0 a3=7ffedc68b87c items=0 ppid=1549 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:33:12.649000 audit[1595]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.649000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffebbf4a260 a2=0 a3=7ffebbf4a24c items=0 ppid=1549 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:33:12.651000 audit[1597]: NETFILTER_CFG 
table=filter:21 family=2 entries=1 op=nft_register_rule pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.651000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdfda78530 a2=0 a3=7ffdfda7851c items=0 ppid=1549 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:33:12.653000 audit[1600]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.653000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff7b9a9bc0 a2=0 a3=7fff7b9a9bac items=0 ppid=1549 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:33:12.654000 audit[1601]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.654000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeede27e00 a2=0 a3=7ffeede27dec items=0 ppid=1549 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:33:12.655000 audit[1603]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.655000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1c9186a0 a2=0 a3=7fff1c91868c items=0 ppid=1549 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.655000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:33:12.656000 audit[1604]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1604 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.656000 audit[1604]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc289a4e30 a2=0 a3=7ffc289a4e1c items=0 ppid=1549 pid=1604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:33:12.658000 audit[1606]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.658000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe5ce89e20 a2=0 a3=7ffe5ce89e0c items=0 ppid=1549 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.658000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:33:12.661000 audit[1609]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.661000 audit[1609]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe20385550 a2=0 a3=7ffe2038553c items=0 ppid=1549 pid=1609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:33:12.662000 audit[1610]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.662000 audit[1610]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0efe0bd0 a2=0 a3=7ffd0efe0bbc items=0 ppid=1549 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:33:12.663000 audit[1612]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.663000 audit[1612]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef34c3240 a2=0 a3=7ffef34c322c items=0 ppid=1549 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:33:12.664000 audit[1613]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.664000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc71f349a0 a2=0 a3=7ffc71f3498c items=0 ppid=1549 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:33:12.666000 audit[1615]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.666000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc26fbc190 a2=0 a3=7ffc26fbc17c items=0 ppid=1549 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:33:12.669000 audit[1618]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.669000 audit[1618]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffba4a8400 a2=0 a3=7fffba4a83ec items=0 ppid=1549 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:33:12.671000 audit[1621]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.671000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd808bbd30 a2=0 a3=7ffd808bbd1c items=0 ppid=1549 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:33:12.672000 audit[1622]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.672000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1006d280 a2=0 a3=7ffd1006d26c items=0 ppid=1549 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 
19:33:12.680207 kubelet[1401]: E1002 19:33:12.680182 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:12.674000 audit[1624]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1624 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.674000 audit[1624]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffcea58bf0 a2=0 a3=7fffcea58bdc items=0 ppid=1549 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.674000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:33:12.819000 audit[1629]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.819000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc0f8c4150 a2=0 a3=7ffc0f8c413c items=0 ppid=1549 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:33:12.823000 audit[1634]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.823000 audit[1634]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfd7bd610 a2=0 a3=7ffcfd7bd5fc items=0 ppid=1549 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:33:12.824000 audit[1636]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:33:12.824000 audit[1636]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcaf454530 a2=0 a3=7ffcaf45451c items=0 ppid=1549 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:33:12.832000 audit[1638]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:33:12.832000 audit[1638]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffdbdcfde30 a2=0 a3=7ffdbdcfde1c items=0 ppid=1549 pid=1638 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:33:12.847000 audit[1638]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1638 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:33:12.847000 audit[1638]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffdbdcfde30 a2=0 a3=7ffdbdcfde1c items=0 ppid=1549 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:33:12.849000 audit[1644]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1644 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.849000 audit[1644]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc9b968170 a2=0 a3=7ffc9b96815c items=0 ppid=1549 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:33:12.851000 audit[1646]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1646 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.851000 audit[1646]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffda143e9a0 a2=0 a3=7ffda143e98c items=0 ppid=1549 pid=1646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:33:12.854000 audit[1649]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.854000 audit[1649]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe16711700 a2=0 a3=7ffe167116ec items=0 ppid=1549 pid=1649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:33:12.855000 audit[1650]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1650 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.855000 
audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe563b6de0 a2=0 a3=7ffe563b6dcc items=0 ppid=1549 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:33:12.857000 audit[1652]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1652 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.857000 audit[1652]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe57b0f300 a2=0 a3=7ffe57b0f2ec items=0 ppid=1549 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:33:12.857000 audit[1653]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.857000 audit[1653]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed468c6b0 a2=0 a3=7ffed468c69c items=0 ppid=1549 pid=1653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:33:12.859000 audit[1655]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1655 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.859000 audit[1655]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf5331200 a2=0 a3=7ffdf53311ec items=0 ppid=1549 pid=1655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:33:12.861000 audit[1658]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.861000 audit[1658]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff7fd1ed30 a2=0 a3=7fff7fd1ed1c items=0 ppid=1549 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.861000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:33:12.862000 audit[1659]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1659 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.862000 audit[1659]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd30608a20 a2=0 a3=7ffd30608a0c items=0 ppid=1549 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:33:12.864000 audit[1661]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1661 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.864000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffecb8eb9a0 a2=0 a3=7ffecb8eb98c items=0 ppid=1549 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:33:12.865000 audit[1662]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.865000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc09f951a0 a2=0 a3=7ffc09f9518c items=0 ppid=1549 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:33:12.866000 audit[1664]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1664 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.866000 audit[1664]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3b2e8c60 a2=0 a3=7ffc3b2e8c4c items=0 ppid=1549 pid=1664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:33:12.871000 audit[1667]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1667 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.871000 audit[1667]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0a7600e0 a2=0 a3=7ffe0a7600cc items=0 ppid=1549 
pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:33:12.873000 audit[1670]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.873000 audit[1670]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc63b8d30 a2=0 a3=7ffcc63b8d1c items=0 ppid=1549 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.873000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:33:12.874000 audit[1671]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.874000 audit[1671]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe141ed4f0 a2=0 a3=7ffe141ed4dc items=0 ppid=1549 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:33:12.877276 env[1101]: time="2023-10-02T19:33:12.876187208Z" level=info msg="StartContainer for \"e3526842358d529e79b06cc030bcdd21beca39e17684f269b2295998619bcea7\" returns successfully" Oct 2 19:33:12.876000 audit[1673]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1673 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.876000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff80ec54c0 a2=0 a3=7fff80ec54ac items=0 ppid=1549 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:33:12.879000 audit[1676]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1676 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.879000 audit[1676]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffeeba60570 a2=0 a3=7ffeeba6055c items=0 ppid=1549 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.879000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:33:12.880000 audit[1677]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.880000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0e24c120 a2=0 a3=7ffd0e24c10c items=0 ppid=1549 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:33:12.881000 audit[1679]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_rule pid=1679 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.881000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe367b1a60 a2=0 a3=7ffe367b1a4c items=0 ppid=1549 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:33:12.884000 audit[1682]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_rule pid=1682 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.884000 audit[1682]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd4d8ec980 a2=0 a3=7ffd4d8ec96c items=0 ppid=1549 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:33:12.884000 audit[1683]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=1683 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.884000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc410c0d60 a2=0 a3=7ffc410c0d4c items=0 ppid=1549 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:33:12.886000 audit[1685]: NETFILTER_CFG table=nat:62 family=10 entries=2 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:33:12.886000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdf258adb0 a2=0 a3=7ffdf258ad9c items=0 ppid=1549 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.886000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:33:12.888000 audit[1687]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:33:12.888000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffb4596250 a2=0 a3=7fffb459623c items=0 ppid=1549 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.888000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:33:12.888000 audit[1687]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1687 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:33:12.888000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7fffb4596250 a2=0 a3=7fffb459623c items=0 ppid=1549 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:33:12.888000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:33:13.125636 systemd[1]: run-containerd-runc-k8s.io-e3526842358d529e79b06cc030bcdd21beca39e17684f269b2295998619bcea7-runc.02nEii.mount: Deactivated successfully. 
Oct 2 19:33:13.680958 kubelet[1401]: E1002 19:33:13.680912 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:13.879555 kubelet[1401]: E1002 19:33:13.879529 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:13.887266 kubelet[1401]: I1002 19:33:13.887238 1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vfl5t" podStartSLOduration=4.757344921 podCreationTimestamp="2023-10-02 19:33:06 +0000 UTC" firstStartedPulling="2023-10-02 19:33:08.912823454 +0000 UTC m=+4.489881361" lastFinishedPulling="2023-10-02 19:33:12.042675679 +0000 UTC m=+7.619733576" observedRunningTime="2023-10-02 19:33:13.886353343 +0000 UTC m=+9.463411250" watchObservedRunningTime="2023-10-02 19:33:13.887197136 +0000 UTC m=+9.464255043" Oct 2 19:33:14.681326 kubelet[1401]: E1002 19:33:14.681246 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:14.884062 kubelet[1401]: E1002 19:33:14.884028 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:15.681622 kubelet[1401]: E1002 19:33:15.681542 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:16.682491 kubelet[1401]: E1002 19:33:16.682427 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:17.682927 kubelet[1401]: E1002 19:33:17.682856 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:18.683786 kubelet[1401]: E1002 19:33:18.683740 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:19.268875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267182703.mount: Deactivated successfully. 
Oct 2 19:33:19.684313 kubelet[1401]: E1002 19:33:19.684267 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:20.685250 kubelet[1401]: E1002 19:33:20.685192 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:21.685972 kubelet[1401]: E1002 19:33:21.685922 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:22.686175 kubelet[1401]: E1002 19:33:22.686134 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:23.637750 env[1101]: time="2023-10-02T19:33:23.637667778Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:23.639493 env[1101]: time="2023-10-02T19:33:23.639451541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:23.641153 env[1101]: time="2023-10-02T19:33:23.641113971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:33:23.641733 env[1101]: time="2023-10-02T19:33:23.641682372Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 19:33:23.643610 env[1101]: time="2023-10-02T19:33:23.643569955Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:33:23.657064 env[1101]: time="2023-10-02T19:33:23.657006798Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\"" Oct 2 19:33:23.657630 env[1101]: time="2023-10-02T19:33:23.657586905Z" level=info msg="StartContainer for \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\"" Oct 2 19:33:23.683181 systemd[1]: Started cri-containerd-6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462.scope. Oct 2 19:33:23.687189 kubelet[1401]: E1002 19:33:23.687169 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:23.700877 systemd[1]: cri-containerd-6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462.scope: Deactivated successfully. Oct 2 19:33:23.704031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462-rootfs.mount: Deactivated successfully. 
Oct 2 19:33:24.307913 env[1101]: time="2023-10-02T19:33:24.307834641Z" level=info msg="shim disconnected" id=6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462 Oct 2 19:33:24.307913 env[1101]: time="2023-10-02T19:33:24.307882189Z" level=warning msg="cleaning up after shim disconnected" id=6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462 namespace=k8s.io Oct 2 19:33:24.307913 env[1101]: time="2023-10-02T19:33:24.307890247Z" level=info msg="cleaning up dead shim" Oct 2 19:33:24.315935 env[1101]: time="2023-10-02T19:33:24.315870524Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1712 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:33:24.316310 env[1101]: time="2023-10-02T19:33:24.316206125Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:33:24.316526 env[1101]: time="2023-10-02T19:33:24.316443192Z" level=error msg="Failed to pipe stderr of container \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\"" error="reading from a closed fifo" Oct 2 19:33:24.317603 env[1101]: time="2023-10-02T19:33:24.317567781Z" level=error msg="Failed to pipe stdout of container \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\"" error="reading from a closed fifo" Oct 2 19:33:24.320080 env[1101]: time="2023-10-02T19:33:24.320033018Z" level=error msg="StartContainer for \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:33:24.320338 kubelet[1401]: E1002 19:33:24.320307 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462" Oct 2 19:33:24.320470 kubelet[1401]: E1002 19:33:24.320446 1401 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:33:24.320470 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:33:24.320470 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 19:33:24.320565 kubelet[1401]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6kl42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:33:24.320565 kubelet[1401]: E1002 19:33:24.320491 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:33:24.674524 kubelet[1401]: E1002 19:33:24.674371 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:24.687857 kubelet[1401]: E1002 19:33:24.687835 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:24.900636 kubelet[1401]: E1002 19:33:24.900603 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:24.902355 env[1101]: time="2023-10-02T19:33:24.902309396Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:33:24.918624 env[1101]: time="2023-10-02T19:33:24.918563794Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\"" Oct 2 19:33:24.919162 env[1101]: time="2023-10-02T19:33:24.919132669Z" level=info msg="StartContainer for \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\"" Oct 2 
19:33:24.937438 systemd[1]: Started cri-containerd-6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced.scope. Oct 2 19:33:24.969181 systemd[1]: cri-containerd-6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced.scope: Deactivated successfully. Oct 2 19:33:25.062131 env[1101]: time="2023-10-02T19:33:25.062045852Z" level=info msg="shim disconnected" id=6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced Oct 2 19:33:25.062131 env[1101]: time="2023-10-02T19:33:25.062131618Z" level=warning msg="cleaning up after shim disconnected" id=6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced namespace=k8s.io Oct 2 19:33:25.062390 env[1101]: time="2023-10-02T19:33:25.062142132Z" level=info msg="cleaning up dead shim" Oct 2 19:33:25.069918 env[1101]: time="2023-10-02T19:33:25.069869015Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1748 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:33:25.070155 env[1101]: time="2023-10-02T19:33:25.070096443Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:33:25.070320 env[1101]: time="2023-10-02T19:33:25.070259405Z" level=error msg="Failed to pipe stdout of container \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\"" error="reading from a closed fifo" Oct 2 19:33:25.070320 env[1101]: time="2023-10-02T19:33:25.070310330Z" level=error msg="Failed to pipe stderr of container \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\"" error="reading from a closed fifo" Oct 2 19:33:25.072507 env[1101]: time="2023-10-02T19:33:25.072469672Z" level=error msg="StartContainer for \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:33:25.072771 kubelet[1401]: E1002 19:33:25.072741 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced" Oct 2 19:33:25.072899 kubelet[1401]: E1002 19:33:25.072858 1401 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:33:25.072899 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:33:25.072899 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 19:33:25.072899 kubelet[1401]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6kl42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:33:25.072899 kubelet[1401]: E1002 19:33:25.072897 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:33:25.689022 kubelet[1401]: E1002 19:33:25.688972 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:25.903721 kubelet[1401]: I1002 19:33:25.903696 1401 scope.go:115] "RemoveContainer" containerID="6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462" Oct 2 19:33:25.903900 kubelet[1401]: I1002 19:33:25.903886 1401 scope.go:115] "RemoveContainer" containerID="6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462" Oct 2 19:33:25.904716 env[1101]: time="2023-10-02T19:33:25.904691152Z" level=info msg="RemoveContainer for \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\"" Oct 2 19:33:25.905066 env[1101]: time="2023-10-02T19:33:25.904835692Z" level=info msg="RemoveContainer for \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\"" Oct 2 19:33:25.905141 env[1101]: time="2023-10-02T19:33:25.905111930Z" level=error msg="RemoveContainer for \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\" failed" error="failed to set removing state for container \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\": container is already in removing state" Oct 2 19:33:25.905305 kubelet[1401]: E1002 19:33:25.905288 1401 remote_runtime.go:368] "RemoveContainer from runtime service 
failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\": container is already in removing state" containerID="6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462" Oct 2 19:33:25.905363 kubelet[1401]: E1002 19:33:25.905324 1401 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462": container is already in removing state; Skipping pod "cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)" Oct 2 19:33:25.905401 kubelet[1401]: E1002 19:33:25.905393 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:25.905599 kubelet[1401]: E1002 19:33:25.905588 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:33:25.913155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced-rootfs.mount: Deactivated successfully. Oct 2 19:33:25.946130 env[1101]: time="2023-10-02T19:33:25.946038148Z" level=info msg="RemoveContainer for \"6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462\" returns successfully" Oct 2 19:33:26.689181 kubelet[1401]: E1002 19:33:26.689093 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:26.907047 kubelet[1401]: E1002 19:33:26.906995 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:26.907215 kubelet[1401]: E1002 19:33:26.907200 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:33:27.413890 kubelet[1401]: W1002 19:33:27.413818 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34c082fd_1c33_496d_be5a_a58a734e36df.slice/cri-containerd-6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462.scope WatchSource:0}: container "6bea920d431a867be39ecda48be6d61dab4fee1a4dad6d26c5203e440b725462" in namespace "k8s.io": not found Oct 2 19:33:27.690019 kubelet[1401]: E1002 19:33:27.689921 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:28.690211 kubelet[1401]: E1002 19:33:28.690156 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:29.690894 kubelet[1401]: E1002 19:33:29.690850 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:30.520173 kubelet[1401]: W1002 
19:33:30.520128 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34c082fd_1c33_496d_be5a_a58a734e36df.slice/cri-containerd-6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced.scope WatchSource:0}: task 6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced not found: not found Oct 2 19:33:30.691287 kubelet[1401]: E1002 19:33:30.691238 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:31.691902 kubelet[1401]: E1002 19:33:31.691855 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:32.692580 kubelet[1401]: E1002 19:33:32.692532 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:33.692878 kubelet[1401]: E1002 19:33:33.692839 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:34.693403 kubelet[1401]: E1002 19:33:34.693372 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:35.694354 kubelet[1401]: E1002 19:33:35.694323 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:36.694631 kubelet[1401]: E1002 19:33:36.694579 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:37.695591 kubelet[1401]: E1002 19:33:37.695539 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:38.695927 kubelet[1401]: E1002 19:33:38.695898 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:38.891023 update_engine[1086]: I1002 19:33:38.890973 1086 update_attempter.cc:505] Updating boot flags... Oct 2 19:33:39.696666 kubelet[1401]: E1002 19:33:39.696600 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:40.697043 kubelet[1401]: E1002 19:33:40.697008 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:41.697915 kubelet[1401]: E1002 19:33:41.697834 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:41.849187 kubelet[1401]: E1002 19:33:41.849043 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:41.851467 env[1101]: time="2023-10-02T19:33:41.851419722Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:33:41.862216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389303151.mount: Deactivated successfully. 
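Every StartContainer attempt in this excerpt dies on the same write: before exec, runc (via the go-selinux helper it vendors) pushes the requested process label into /proc/self/attr/keycreate so that kernel keyrings created by the container get labeled, and the kernel answers EINVAL, most likely because the SELinux policy loaded on this Flatcar node does not accept the spc_t label requested in the pod's SELinuxOptions. The stand-alone probe below only repeats that single write so the failure can be reproduced outside runc; the type and level come from the container spec dumped above, the user and role parts of the label are assumptions.

// keycreate_probe.go - hypothetical stand-alone probe; it only repeats the single
// write that runc fails on, it is not runc code.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Type (spc_t) and level (s0) come from the SELinuxOptions in the dumped spec;
	// the user and role parts are assumptions.
	label := "system_u:system_r:spc_t:s0"

	// go-selinux (vendored by runc) writes the label here so that kernel keyrings
	// created by the container process are labeled. If the loaded policy does not
	// accept the label, the kernel returns EINVAL - the "invalid argument" above.
	if err := os.WriteFile("/proc/thread-self/attr/keycreate", []byte(label), 0); err != nil {
		fmt.Println("keycreate write failed:", err)
		return
	}
	fmt.Println("keycreate label accepted")
}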
Oct 2 19:33:41.869377 env[1101]: time="2023-10-02T19:33:41.869295635Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\"" Oct 2 19:33:41.870104 env[1101]: time="2023-10-02T19:33:41.870062755Z" level=info msg="StartContainer for \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\"" Oct 2 19:33:41.886405 systemd[1]: Started cri-containerd-14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c.scope. Oct 2 19:33:41.897403 systemd[1]: cri-containerd-14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c.scope: Deactivated successfully. Oct 2 19:33:41.907005 env[1101]: time="2023-10-02T19:33:41.906920145Z" level=info msg="shim disconnected" id=14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c Oct 2 19:33:41.907005 env[1101]: time="2023-10-02T19:33:41.906994828Z" level=warning msg="cleaning up after shim disconnected" id=14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c namespace=k8s.io Oct 2 19:33:41.907005 env[1101]: time="2023-10-02T19:33:41.907012384Z" level=info msg="cleaning up dead shim" Oct 2 19:33:41.915038 env[1101]: time="2023-10-02T19:33:41.914964953Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1800 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:33:41.915385 env[1101]: time="2023-10-02T19:33:41.915297703Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:33:41.918717 env[1101]: time="2023-10-02T19:33:41.918638487Z" level=error msg="Failed to pipe stderr of container \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\"" error="reading from a closed fifo" Oct 2 19:33:41.918717 env[1101]: time="2023-10-02T19:33:41.918652034Z" level=error msg="Failed to pipe stdout of container \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\"" error="reading from a closed fifo" Oct 2 19:33:41.921187 env[1101]: time="2023-10-02T19:33:41.921123276Z" level=error msg="StartContainer for \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:33:41.921476 kubelet[1401]: E1002 19:33:41.921442 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c" Oct 2 19:33:41.921598 kubelet[1401]: E1002 19:33:41.921579 1401 kuberuntime_manager.go:1212] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:33:41.921598 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:33:41.921598 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 19:33:41.921598 kubelet[1401]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6kl42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:33:41.921790 kubelet[1401]: E1002 19:33:41.921614 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:33:41.936067 kubelet[1401]: I1002 19:33:41.936033 1401 scope.go:115] "RemoveContainer" containerID="6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced" Oct 2 19:33:41.936529 kubelet[1401]: I1002 19:33:41.936475 1401 scope.go:115] "RemoveContainer" containerID="6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced" Oct 2 19:33:41.937080 env[1101]: time="2023-10-02T19:33:41.937047561Z" level=info msg="RemoveContainer for \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\"" Oct 2 19:33:41.937653 env[1101]: time="2023-10-02T19:33:41.937615674Z" level=info msg="RemoveContainer for \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\"" Oct 2 19:33:41.937736 env[1101]: time="2023-10-02T19:33:41.937678704Z" level=error msg="RemoveContainer for \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\" failed" error="failed to set 
removing state for container \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\": container is already in removing state" Oct 2 19:33:41.937846 kubelet[1401]: E1002 19:33:41.937824 1401 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\": container is already in removing state" containerID="6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced" Oct 2 19:33:41.937913 kubelet[1401]: E1002 19:33:41.937868 1401 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced": container is already in removing state; Skipping pod "cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)" Oct 2 19:33:41.937983 kubelet[1401]: E1002 19:33:41.937967 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:41.938248 kubelet[1401]: E1002 19:33:41.938221 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:33:41.939889 env[1101]: time="2023-10-02T19:33:41.939861204Z" level=info msg="RemoveContainer for \"6a9fbb486e34d4861967a17fe16c88d640223690a0697f529bf9df9e74daaced\" returns successfully" Oct 2 19:33:42.698185 kubelet[1401]: E1002 19:33:42.698151 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:42.859605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c-rootfs.mount: Deactivated successfully. 
Oct 2 19:33:43.698904 kubelet[1401]: E1002 19:33:43.698841 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:44.674850 kubelet[1401]: E1002 19:33:44.674787 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:44.699071 kubelet[1401]: E1002 19:33:44.699020 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:45.013397 kubelet[1401]: W1002 19:33:45.013131 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34c082fd_1c33_496d_be5a_a58a734e36df.slice/cri-containerd-14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c.scope WatchSource:0}: task 14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c not found: not found Oct 2 19:33:45.699610 kubelet[1401]: E1002 19:33:45.699554 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:46.699964 kubelet[1401]: E1002 19:33:46.699868 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:47.700236 kubelet[1401]: E1002 19:33:47.700180 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:48.700742 kubelet[1401]: E1002 19:33:48.700713 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:49.701226 kubelet[1401]: E1002 19:33:49.701161 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:50.701539 kubelet[1401]: E1002 19:33:50.701490 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:51.702633 kubelet[1401]: E1002 19:33:51.702506 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:52.703085 kubelet[1401]: E1002 19:33:52.703028 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:53.703985 kubelet[1401]: E1002 19:33:53.703945 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:54.704215 kubelet[1401]: E1002 19:33:54.704174 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:54.849027 kubelet[1401]: E1002 19:33:54.848999 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:54.849216 kubelet[1401]: E1002 19:33:54.849180 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:33:55.704653 kubelet[1401]: E1002 19:33:55.704598 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:33:56.704786 kubelet[1401]: E1002 19:33:56.704751 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:57.705882 kubelet[1401]: E1002 19:33:57.705818 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:58.706585 kubelet[1401]: E1002 19:33:58.706560 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:59.706947 kubelet[1401]: E1002 19:33:59.706900 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:00.707164 kubelet[1401]: E1002 19:34:00.707121 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:01.707626 kubelet[1401]: E1002 19:34:01.707584 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:02.708200 kubelet[1401]: E1002 19:34:02.708164 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:03.708861 kubelet[1401]: E1002 19:34:03.708829 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:04.675045 kubelet[1401]: E1002 19:34:04.675002 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:04.709491 kubelet[1401]: E1002 19:34:04.709452 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:05.710275 kubelet[1401]: E1002 19:34:05.710212 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:06.710837 kubelet[1401]: E1002 19:34:06.710793 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:06.849902 kubelet[1401]: E1002 19:34:06.849869 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:06.851737 env[1101]: time="2023-10-02T19:34:06.851705596Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:34:06.861962 env[1101]: time="2023-10-02T19:34:06.861931504Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\"" Oct 2 19:34:06.862311 env[1101]: time="2023-10-02T19:34:06.862275133Z" level=info msg="StartContainer for \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\"" Oct 2 19:34:06.876675 systemd[1]: Started cri-containerd-d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84.scope. Oct 2 19:34:06.884031 systemd[1]: cri-containerd-d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84.scope: Deactivated successfully. 
Oct 2 19:34:06.884301 systemd[1]: Stopped cri-containerd-d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84.scope. Oct 2 19:34:06.887679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84-rootfs.mount: Deactivated successfully. Oct 2 19:34:06.894690 env[1101]: time="2023-10-02T19:34:06.894638434Z" level=info msg="shim disconnected" id=d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84 Oct 2 19:34:06.894804 env[1101]: time="2023-10-02T19:34:06.894701136Z" level=warning msg="cleaning up after shim disconnected" id=d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84 namespace=k8s.io Oct 2 19:34:06.894804 env[1101]: time="2023-10-02T19:34:06.894715674Z" level=info msg="cleaning up dead shim" Oct 2 19:34:06.902091 env[1101]: time="2023-10-02T19:34:06.902046127Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1840 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:06.902302 env[1101]: time="2023-10-02T19:34:06.902260985Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:34:06.902542 env[1101]: time="2023-10-02T19:34:06.902467207Z" level=error msg="Failed to pipe stderr of container \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\"" error="reading from a closed fifo" Oct 2 19:34:06.902542 env[1101]: time="2023-10-02T19:34:06.902489790Z" level=error msg="Failed to pipe stdout of container \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\"" error="reading from a closed fifo" Oct 2 19:34:06.904761 env[1101]: time="2023-10-02T19:34:06.904728336Z" level=error msg="StartContainer for \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:06.904982 kubelet[1401]: E1002 19:34:06.904953 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84" Oct 2 19:34:06.905079 kubelet[1401]: E1002 19:34:06.905064 1401 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:06.905079 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:06.905079 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 19:34:06.905079 kubelet[1401]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6kl42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:06.905203 kubelet[1401]: E1002 19:34:06.905102 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:34:06.970410 kubelet[1401]: I1002 19:34:06.970329 1401 scope.go:115] "RemoveContainer" containerID="14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c" Oct 2 19:34:06.970693 kubelet[1401]: I1002 19:34:06.970626 1401 scope.go:115] "RemoveContainer" containerID="14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c" Oct 2 19:34:06.971661 env[1101]: time="2023-10-02T19:34:06.971633517Z" level=info msg="RemoveContainer for \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\"" Oct 2 19:34:06.971978 env[1101]: time="2023-10-02T19:34:06.971951295Z" level=info msg="RemoveContainer for \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\"" Oct 2 19:34:06.972150 env[1101]: time="2023-10-02T19:34:06.972111567Z" level=error msg="RemoveContainer for \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\" failed" error="failed to set removing state for container \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\": container is already in removing state" Oct 2 19:34:06.972256 kubelet[1401]: E1002 19:34:06.972245 1401 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\": container is already in 
removing state" containerID="14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c" Oct 2 19:34:06.972302 kubelet[1401]: E1002 19:34:06.972269 1401 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c": container is already in removing state; Skipping pod "cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)" Oct 2 19:34:06.972326 kubelet[1401]: E1002 19:34:06.972318 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:06.972505 kubelet[1401]: E1002 19:34:06.972496 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:34:06.974020 env[1101]: time="2023-10-02T19:34:06.974000893Z" level=info msg="RemoveContainer for \"14d0ccf659cd2adc103bd5cfb0666c9ea0b696f774a7e233e9f516b5555ef02c\" returns successfully" Oct 2 19:34:07.711549 kubelet[1401]: E1002 19:34:07.711477 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:08.712352 kubelet[1401]: E1002 19:34:08.712285 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:09.713423 kubelet[1401]: E1002 19:34:09.713387 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:09.999769 kubelet[1401]: W1002 19:34:09.999662 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34c082fd_1c33_496d_be5a_a58a734e36df.slice/cri-containerd-d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84.scope WatchSource:0}: task d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84 not found: not found Oct 2 19:34:10.713969 kubelet[1401]: E1002 19:34:10.713934 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:11.714703 kubelet[1401]: E1002 19:34:11.714665 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:12.715424 kubelet[1401]: E1002 19:34:12.715377 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:13.715487 kubelet[1401]: E1002 19:34:13.715446 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:14.716612 kubelet[1401]: E1002 19:34:14.716566 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:15.717087 kubelet[1401]: E1002 19:34:15.717048 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:16.718192 kubelet[1401]: E1002 19:34:16.718147 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:34:17.719301 kubelet[1401]: E1002 19:34:17.719257 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:18.720084 kubelet[1401]: E1002 19:34:18.720042 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:19.721059 kubelet[1401]: E1002 19:34:19.720980 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:20.721664 kubelet[1401]: E1002 19:34:20.721615 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:21.722201 kubelet[1401]: E1002 19:34:21.722133 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:22.722951 kubelet[1401]: E1002 19:34:22.722910 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:22.849002 kubelet[1401]: E1002 19:34:22.848976 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:22.849638 kubelet[1401]: E1002 19:34:22.849620 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:34:23.723840 kubelet[1401]: E1002 19:34:23.723800 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:24.674675 kubelet[1401]: E1002 19:34:24.674640 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:24.724904 kubelet[1401]: E1002 19:34:24.724869 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:25.725485 kubelet[1401]: E1002 19:34:25.725452 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:26.726289 kubelet[1401]: E1002 19:34:26.726224 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:27.726914 kubelet[1401]: E1002 19:34:27.726886 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:28.728176 kubelet[1401]: E1002 19:34:28.728105 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:29.729146 kubelet[1401]: E1002 19:34:29.729072 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:30.729742 kubelet[1401]: E1002 19:34:30.729698 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:30.849678 kubelet[1401]: E1002 19:34:30.849651 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:31.730053 kubelet[1401]: E1002 19:34:31.729995 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:32.730890 kubelet[1401]: E1002 19:34:32.730849 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:33.731324 kubelet[1401]: E1002 19:34:33.731254 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:34.732193 kubelet[1401]: E1002 19:34:34.732131 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:35.732865 kubelet[1401]: E1002 19:34:35.732815 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:36.733933 kubelet[1401]: E1002 19:34:36.733886 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:36.849289 kubelet[1401]: E1002 19:34:36.849227 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:36.849579 kubelet[1401]: E1002 19:34:36.849554 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:34:37.734739 kubelet[1401]: E1002 19:34:37.734659 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:38.735649 kubelet[1401]: E1002 19:34:38.735591 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:39.736311 kubelet[1401]: E1002 19:34:39.736247 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:40.736570 kubelet[1401]: E1002 19:34:40.736489 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:41.737349 kubelet[1401]: E1002 19:34:41.737319 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:42.737849 kubelet[1401]: E1002 19:34:42.737814 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:43.738556 kubelet[1401]: E1002 19:34:43.738417 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:44.674691 kubelet[1401]: E1002 19:34:44.674647 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:44.739149 kubelet[1401]: E1002 19:34:44.739109 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:45.739253 kubelet[1401]: E1002 19:34:45.739196 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 
2 19:34:46.740196 kubelet[1401]: E1002 19:34:46.740140 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:47.741156 kubelet[1401]: E1002 19:34:47.741103 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:48.742312 kubelet[1401]: E1002 19:34:48.742264 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:49.743003 kubelet[1401]: E1002 19:34:49.742963 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:50.743848 kubelet[1401]: E1002 19:34:50.743778 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:50.848955 kubelet[1401]: E1002 19:34:50.848916 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:50.851060 env[1101]: time="2023-10-02T19:34:50.851006973Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:34:50.867007 env[1101]: time="2023-10-02T19:34:50.866953177Z" level=info msg="CreateContainer within sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc\"" Oct 2 19:34:50.867409 env[1101]: time="2023-10-02T19:34:50.867380037Z" level=info msg="StartContainer for \"d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc\"" Oct 2 19:34:50.890062 systemd[1]: Started cri-containerd-d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc.scope. Oct 2 19:34:50.901865 systemd[1]: cri-containerd-d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc.scope: Deactivated successfully. 
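The dns.go:158 warnings are about the node's own resolver configuration, not the Cilium failure: kubelet copies at most three nameservers (the classic glibc resolver limit) into pod DNS config, and since this host lists more than three it keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. The sketch below mirrors that cap against a resolv.conf; the parsing is simplified and the file path is the usual default rather than anything stated in the log.

// nameserver_cap.go - hedged sketch of the check behind the dns.go warnings: keep at
// most three nameservers when composing DNS config. Parsing is simplified and the
// resolv.conf path is the usual default, not something stated in the log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic glibc MAXNS limit that kubelet also enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Println("nameservers within limit:", strings.Join(servers, " "))
}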
Oct 2 19:34:50.930165 env[1101]: time="2023-10-02T19:34:50.930106454Z" level=info msg="shim disconnected" id=d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc Oct 2 19:34:50.930165 env[1101]: time="2023-10-02T19:34:50.930161129Z" level=warning msg="cleaning up after shim disconnected" id=d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc namespace=k8s.io Oct 2 19:34:50.930314 env[1101]: time="2023-10-02T19:34:50.930169725Z" level=info msg="cleaning up dead shim" Oct 2 19:34:50.936198 env[1101]: time="2023-10-02T19:34:50.936167087Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1880 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:50.936411 env[1101]: time="2023-10-02T19:34:50.936368099Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:34:50.936633 env[1101]: time="2023-10-02T19:34:50.936564010Z" level=error msg="Failed to pipe stdout of container \"d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc\"" error="reading from a closed fifo" Oct 2 19:34:50.937067 env[1101]: time="2023-10-02T19:34:50.937029294Z" level=error msg="Failed to pipe stderr of container \"d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc\"" error="reading from a closed fifo" Oct 2 19:34:50.940954 env[1101]: time="2023-10-02T19:34:50.940912443Z" level=error msg="StartContainer for \"d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:50.941166 kubelet[1401]: E1002 19:34:50.941143 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc" Oct 2 19:34:50.941295 kubelet[1401]: E1002 19:34:50.941255 1401 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:50.941295 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:50.941295 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 19:34:50.941295 kubelet[1401]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6kl42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:50.941295 kubelet[1401]: E1002 19:34:50.941309 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:34:51.034906 kubelet[1401]: I1002 19:34:51.034880 1401 scope.go:115] "RemoveContainer" containerID="d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84" Oct 2 19:34:51.035154 kubelet[1401]: I1002 19:34:51.035134 1401 scope.go:115] "RemoveContainer" containerID="d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84" Oct 2 19:34:51.036175 env[1101]: time="2023-10-02T19:34:51.036130823Z" level=info msg="RemoveContainer for \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\"" Oct 2 19:34:51.036451 env[1101]: time="2023-10-02T19:34:51.036423950Z" level=info msg="RemoveContainer for \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\"" Oct 2 19:34:51.036656 env[1101]: time="2023-10-02T19:34:51.036613890Z" level=error msg="RemoveContainer for \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\" failed" error="failed to set removing state for container \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\": container is already in removing state" Oct 2 19:34:51.036825 kubelet[1401]: E1002 19:34:51.036804 1401 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\": container is already in 
removing state" containerID="d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84" Oct 2 19:34:51.036876 kubelet[1401]: E1002 19:34:51.036842 1401 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84": container is already in removing state; Skipping pod "cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)" Oct 2 19:34:51.036928 kubelet[1401]: E1002 19:34:51.036915 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:51.037186 kubelet[1401]: E1002 19:34:51.037171 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:34:51.038985 env[1101]: time="2023-10-02T19:34:51.038955145Z" level=info msg="RemoveContainer for \"d97e660c54652ac3aaf42b04479fee6533e685c2d70934ff980164286927cd84\" returns successfully" Oct 2 19:34:51.744054 kubelet[1401]: E1002 19:34:51.744010 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:51.863010 systemd[1]: run-containerd-runc-k8s.io-d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc-runc.fFS4nA.mount: Deactivated successfully. Oct 2 19:34:51.863102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc-rootfs.mount: Deactivated successfully. 
Oct 2 19:34:52.744233 kubelet[1401]: E1002 19:34:52.744165 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:53.745130 kubelet[1401]: E1002 19:34:53.745085 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:54.035814 kubelet[1401]: W1002 19:34:54.035738 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34c082fd_1c33_496d_be5a_a58a734e36df.slice/cri-containerd-d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc.scope WatchSource:0}: task d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc not found: not found Oct 2 19:34:54.746117 kubelet[1401]: E1002 19:34:54.746063 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:55.746656 kubelet[1401]: E1002 19:34:55.746619 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:56.746932 kubelet[1401]: E1002 19:34:56.746863 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:57.747832 kubelet[1401]: E1002 19:34:57.747614 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:58.748290 kubelet[1401]: E1002 19:34:58.748226 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:59.749219 kubelet[1401]: E1002 19:34:59.749165 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:00.749876 kubelet[1401]: E1002 19:35:00.749814 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:01.750389 kubelet[1401]: E1002 19:35:01.750327 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:02.751506 kubelet[1401]: E1002 19:35:02.751443 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:03.752496 kubelet[1401]: E1002 19:35:03.752435 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:04.674215 kubelet[1401]: E1002 19:35:04.674166 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:04.698040 kubelet[1401]: E1002 19:35:04.698009 1401 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:35:04.753422 kubelet[1401]: E1002 19:35:04.753376 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:04.849322 kubelet[1401]: E1002 19:35:04.849291 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:04.849507 kubelet[1401]: E1002 19:35:04.849487 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:35:05.121624 kubelet[1401]: E1002 19:35:05.121598 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:05.754413 kubelet[1401]: E1002 19:35:05.754344 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:06.755295 kubelet[1401]: E1002 19:35:06.755233 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:07.756453 kubelet[1401]: E1002 19:35:07.756384 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:08.756837 kubelet[1401]: E1002 19:35:08.756782 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:09.757867 kubelet[1401]: E1002 19:35:09.757806 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:10.123208 kubelet[1401]: E1002 19:35:10.123141 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:10.758227 kubelet[1401]: E1002 19:35:10.758160 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:11.758691 kubelet[1401]: E1002 19:35:11.758624 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:12.759068 kubelet[1401]: E1002 19:35:12.759004 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:13.759676 kubelet[1401]: E1002 19:35:13.759598 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:14.760526 kubelet[1401]: E1002 19:35:14.760456 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:15.123932 kubelet[1401]: E1002 19:35:15.123902 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:15.760771 kubelet[1401]: E1002 19:35:15.760713 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:16.761838 kubelet[1401]: E1002 19:35:16.761784 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:17.762070 kubelet[1401]: E1002 19:35:17.761981 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:18.763091 kubelet[1401]: E1002 19:35:18.763033 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:19.763735 kubelet[1401]: E1002 19:35:19.763642 1401 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:19.849726 kubelet[1401]: E1002 19:35:19.849691 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:19.849922 kubelet[1401]: E1002 19:35:19.849900 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:35:20.124387 kubelet[1401]: E1002 19:35:20.124347 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:20.764754 kubelet[1401]: E1002 19:35:20.764691 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:21.764894 kubelet[1401]: E1002 19:35:21.764840 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:22.765351 kubelet[1401]: E1002 19:35:22.765297 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:23.766307 kubelet[1401]: E1002 19:35:23.766229 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:24.674895 kubelet[1401]: E1002 19:35:24.674854 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:24.767426 kubelet[1401]: E1002 19:35:24.767365 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:25.124841 kubelet[1401]: E1002 19:35:25.124814 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:25.768347 kubelet[1401]: E1002 19:35:25.768280 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:26.769168 kubelet[1401]: E1002 19:35:26.769099 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:27.769272 kubelet[1401]: E1002 19:35:27.769215 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:28.769951 kubelet[1401]: E1002 19:35:28.769890 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:29.770325 kubelet[1401]: E1002 19:35:29.770244 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:30.126021 kubelet[1401]: E1002 19:35:30.125980 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:30.770619 kubelet[1401]: E1002 19:35:30.770576 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:35:30.849739 kubelet[1401]: E1002 19:35:30.849694 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:30.849958 kubelet[1401]: E1002 19:35:30.849932 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:35:31.771418 kubelet[1401]: E1002 19:35:31.771366 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:32.772637 kubelet[1401]: E1002 19:35:32.772528 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:33.773420 kubelet[1401]: E1002 19:35:33.773355 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:34.774120 kubelet[1401]: E1002 19:35:34.774051 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:35.127084 kubelet[1401]: E1002 19:35:35.127054 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:35.774226 kubelet[1401]: E1002 19:35:35.774154 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:36.774981 kubelet[1401]: E1002 19:35:36.774926 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:37.775476 kubelet[1401]: E1002 19:35:37.775434 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:38.775848 kubelet[1401]: E1002 19:35:38.775787 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:39.776866 kubelet[1401]: E1002 19:35:39.776806 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:40.128202 kubelet[1401]: E1002 19:35:40.128175 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:40.777338 kubelet[1401]: E1002 19:35:40.777266 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:41.778146 kubelet[1401]: E1002 19:35:41.778082 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:41.849150 kubelet[1401]: E1002 19:35:41.849105 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:41.849328 kubelet[1401]: E1002 19:35:41.849314 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with 
CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:35:42.778746 kubelet[1401]: E1002 19:35:42.778650 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:43.779060 kubelet[1401]: E1002 19:35:43.778989 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:44.674676 kubelet[1401]: E1002 19:35:44.674623 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:44.779165 kubelet[1401]: E1002 19:35:44.779100 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:44.849171 kubelet[1401]: E1002 19:35:44.849137 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:45.128555 kubelet[1401]: E1002 19:35:45.128527 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:45.780303 kubelet[1401]: E1002 19:35:45.780236 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:46.781082 kubelet[1401]: E1002 19:35:46.781024 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:47.781640 kubelet[1401]: E1002 19:35:47.781495 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:48.782390 kubelet[1401]: E1002 19:35:48.782326 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:49.783371 kubelet[1401]: E1002 19:35:49.783267 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:50.129840 kubelet[1401]: E1002 19:35:50.129743 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:50.784071 kubelet[1401]: E1002 19:35:50.784008 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:51.784681 kubelet[1401]: E1002 19:35:51.784629 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:52.785324 kubelet[1401]: E1002 19:35:52.785247 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:52.849286 kubelet[1401]: E1002 19:35:52.849237 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:35:52.849500 kubelet[1401]: E1002 19:35:52.849485 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s 
restarting failed container=mount-cgroup pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:35:53.786305 kubelet[1401]: E1002 19:35:53.786247 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:54.786631 kubelet[1401]: E1002 19:35:54.786570 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:55.130627 kubelet[1401]: E1002 19:35:55.130530 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:55.787545 kubelet[1401]: E1002 19:35:55.787467 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:56.788099 kubelet[1401]: E1002 19:35:56.788036 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:57.788676 kubelet[1401]: E1002 19:35:57.788619 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:58.789564 kubelet[1401]: E1002 19:35:58.789488 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:59.790058 kubelet[1401]: E1002 19:35:59.789987 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:00.131944 kubelet[1401]: E1002 19:36:00.131825 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:00.790844 kubelet[1401]: E1002 19:36:00.790773 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:01.790999 kubelet[1401]: E1002 19:36:01.790930 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:02.791901 kubelet[1401]: E1002 19:36:02.791821 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:03.792038 kubelet[1401]: E1002 19:36:03.791967 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:04.674247 kubelet[1401]: E1002 19:36:04.674204 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:04.793123 kubelet[1401]: E1002 19:36:04.793024 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:04.849148 kubelet[1401]: E1002 19:36:04.849098 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:04.849375 kubelet[1401]: E1002 19:36:04.849304 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup 
pod=cilium-55wwm_kube-system(34c082fd-1c33-496d-be5a-a58a734e36df)\"" pod="kube-system/cilium-55wwm" podUID=34c082fd-1c33-496d-be5a-a58a734e36df Oct 2 19:36:05.133290 kubelet[1401]: E1002 19:36:05.133256 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:05.793937 kubelet[1401]: E1002 19:36:05.793860 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:06.794835 kubelet[1401]: E1002 19:36:06.794742 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:07.795568 kubelet[1401]: E1002 19:36:07.795491 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:08.796687 kubelet[1401]: E1002 19:36:08.796614 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:09.797733 kubelet[1401]: E1002 19:36:09.797667 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:10.135070 kubelet[1401]: E1002 19:36:10.134947 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:10.798719 kubelet[1401]: E1002 19:36:10.798584 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:11.799018 kubelet[1401]: E1002 19:36:11.798955 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:12.799153 kubelet[1401]: E1002 19:36:12.799087 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:13.224771 env[1101]: time="2023-10-02T19:36:13.224024315Z" level=info msg="StopPodSandbox for \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\"" Oct 2 19:36:13.224771 env[1101]: time="2023-10-02T19:36:13.224135938Z" level=info msg="Container to stop \"d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:36:13.226887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156-shm.mount: Deactivated successfully. Oct 2 19:36:13.236271 systemd[1]: cri-containerd-3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156.scope: Deactivated successfully. Oct 2 19:36:13.238246 kernel: kauditd_printk_skb: 302 callbacks suppressed Oct 2 19:36:13.238357 kernel: audit: type=1334 audit(1696275373.235:642): prog-id=68 op=UNLOAD Oct 2 19:36:13.235000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:36:13.241000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:36:13.245545 kernel: audit: type=1334 audit(1696275373.241:643): prog-id=71 op=UNLOAD Oct 2 19:36:13.256583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156-rootfs.mount: Deactivated successfully. 
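Editor's note on the recurring "back-off 1m20s" entries above: they come from the kubelet's restart back-off for the failing mount-cgroup container. With the commonly cited kubelet defaults (10s initial delay, doubled per consecutive failure, capped at 5m), 1m20s corresponds to the fourth consecutive failure. The sketch below only illustrates that progression; the constants are assumed defaults, not values read from this node's configuration.

// backoff_sketch.go -- illustrative sketch of the assumed restart back-off.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second          // assumed initial container restart back-off
	const maxBackoff = 5 * time.Minute // assumed MaxContainerBackOff cap

	for failure := 1; failure <= 6; failure++ {
		fmt.Printf("consecutive failure %d -> back-off %v\n", failure, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff
		}
	}
	// Failure 4 prints "back-off 1m20s", matching the recurring
	// pod_workers.go "Error syncing pod" entries in this journal.
}
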
Oct 2 19:36:13.290844 env[1101]: time="2023-10-02T19:36:13.290780632Z" level=info msg="shim disconnected" id=3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156 Oct 2 19:36:13.290844 env[1101]: time="2023-10-02T19:36:13.290834635Z" level=warning msg="cleaning up after shim disconnected" id=3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156 namespace=k8s.io Oct 2 19:36:13.290844 env[1101]: time="2023-10-02T19:36:13.290846699Z" level=info msg="cleaning up dead shim" Oct 2 19:36:13.299319 env[1101]: time="2023-10-02T19:36:13.299193540Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:36:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1921 runtime=io.containerd.runc.v2\n" Oct 2 19:36:13.299675 env[1101]: time="2023-10-02T19:36:13.299636899Z" level=info msg="TearDown network for sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" successfully" Oct 2 19:36:13.299731 env[1101]: time="2023-10-02T19:36:13.299667567Z" level=info msg="StopPodSandbox for \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" returns successfully" Oct 2 19:36:13.389884 kubelet[1401]: I1002 19:36:13.389827 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c082fd-1c33-496d-be5a-a58a734e36df-hubble-tls\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.389884 kubelet[1401]: I1002 19:36:13.389890 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-host-proc-sys-net\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.389926 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-xtables-lock\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.389960 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-config-path\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.389984 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-lib-modules\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.390003 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-run\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.390023 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-hostproc\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " 
Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.390042 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cni-path\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.390066 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c082fd-1c33-496d-be5a-a58a734e36df-clustermesh-secrets\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.390087 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-host-proc-sys-kernel\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.390106 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-cgroup\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.390135 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-etc-cni-netd\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390171 kubelet[1401]: I1002 19:36:13.390156 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-bpf-maps\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390574 kubelet[1401]: I1002 19:36:13.390180 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kl42\" (UniqueName: \"kubernetes.io/projected/34c082fd-1c33-496d-be5a-a58a734e36df-kube-api-access-6kl42\") pod \"34c082fd-1c33-496d-be5a-a58a734e36df\" (UID: \"34c082fd-1c33-496d-be5a-a58a734e36df\") " Oct 2 19:36:13.390574 kubelet[1401]: I1002 19:36:13.390555 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-hostproc" (OuterVolumeSpecName: "hostproc") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.390917 kubelet[1401]: I1002 19:36:13.390896 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cni-path" (OuterVolumeSpecName: "cni-path") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.391153 kubelet[1401]: I1002 19:36:13.391135 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.391297 kubelet[1401]: I1002 19:36:13.391164 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.391378 kubelet[1401]: I1002 19:36:13.391352 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.391439 kubelet[1401]: I1002 19:36:13.391395 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.391439 kubelet[1401]: I1002 19:36:13.391417 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.391492 kubelet[1401]: I1002 19:36:13.391437 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.391541 kubelet[1401]: I1002 19:36:13.391496 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.391541 kubelet[1401]: I1002 19:36:13.391536 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:13.392708 kubelet[1401]: W1002 19:36:13.392615 1401 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/34c082fd-1c33-496d-be5a-a58a734e36df/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:36:13.393618 kubelet[1401]: I1002 19:36:13.393588 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c082fd-1c33-496d-be5a-a58a734e36df-kube-api-access-6kl42" (OuterVolumeSpecName: "kube-api-access-6kl42") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "kube-api-access-6kl42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:36:13.393836 kubelet[1401]: I1002 19:36:13.393813 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c082fd-1c33-496d-be5a-a58a734e36df-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:36:13.394646 kubelet[1401]: I1002 19:36:13.394610 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:36:13.395057 systemd[1]: var-lib-kubelet-pods-34c082fd\x2d1c33\x2d496d\x2dbe5a\x2da58a734e36df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6kl42.mount: Deactivated successfully. Oct 2 19:36:13.395170 systemd[1]: var-lib-kubelet-pods-34c082fd\x2d1c33\x2d496d\x2dbe5a\x2da58a734e36df-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:36:13.396033 kubelet[1401]: I1002 19:36:13.395965 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c082fd-1c33-496d-be5a-a58a734e36df-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34c082fd-1c33-496d-be5a-a58a734e36df" (UID: "34c082fd-1c33-496d-be5a-a58a734e36df"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:36:13.396899 systemd[1]: var-lib-kubelet-pods-34c082fd\x2d1c33\x2d496d\x2dbe5a\x2da58a734e36df-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490664 1401 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-lib-modules\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490720 1401 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-run\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490736 1401 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34c082fd-1c33-496d-be5a-a58a734e36df-clustermesh-secrets\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490749 1401 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-hostproc\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490761 1401 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cni-path\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490778 1401 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-etc-cni-netd\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490790 1401 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-bpf-maps\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490804 1401 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6kl42\" (UniqueName: \"kubernetes.io/projected/34c082fd-1c33-496d-be5a-a58a734e36df-kube-api-access-6kl42\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490816 1401 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-host-proc-sys-kernel\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490827 1401 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-cgroup\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490839 1401 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34c082fd-1c33-496d-be5a-a58a734e36df-cilium-config-path\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490852 1401 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34c082fd-1c33-496d-be5a-a58a734e36df-hubble-tls\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490875 1401 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-host-proc-sys-net\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.491031 kubelet[1401]: I1002 19:36:13.490895 
1401 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34c082fd-1c33-496d-be5a-a58a734e36df-xtables-lock\") on node \"10.0.0.18\" DevicePath \"\"" Oct 2 19:36:13.800030 kubelet[1401]: E1002 19:36:13.799852 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:14.167963 kubelet[1401]: I1002 19:36:14.167710 1401 scope.go:115] "RemoveContainer" containerID="d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc" Oct 2 19:36:14.172828 env[1101]: time="2023-10-02T19:36:14.172780867Z" level=info msg="RemoveContainer for \"d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc\"" Oct 2 19:36:14.179486 systemd[1]: Removed slice kubepods-burstable-pod34c082fd_1c33_496d_be5a_a58a734e36df.slice. Oct 2 19:36:14.188729 env[1101]: time="2023-10-02T19:36:14.188650283Z" level=info msg="RemoveContainer for \"d9279849330bdd85dcae016e3911f44d19d2b7cc5b603a43c9d396b82e1d7dbc\" returns successfully" Oct 2 19:36:14.801058 kubelet[1401]: E1002 19:36:14.800989 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:14.850715 kubelet[1401]: I1002 19:36:14.850669 1401 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=34c082fd-1c33-496d-be5a-a58a734e36df path="/var/lib/kubelet/pods/34c082fd-1c33-496d-be5a-a58a734e36df/volumes" Oct 2 19:36:15.135572 kubelet[1401]: E1002 19:36:15.135456 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:15.802040 kubelet[1401]: E1002 19:36:15.801981 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:16.062006 kubelet[1401]: I1002 19:36:16.060226 1401 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:36:16.062006 kubelet[1401]: E1002 19:36:16.060356 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: E1002 19:36:16.060372 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: E1002 19:36:16.060382 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: I1002 19:36:16.060432 1401 memory_manager.go:346] "RemoveStaleState removing state" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: I1002 19:36:16.060442 1401 memory_manager.go:346] "RemoveStaleState removing state" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: I1002 19:36:16.060452 1401 memory_manager.go:346] "RemoveStaleState removing state" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: E1002 19:36:16.060469 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: E1002 19:36:16.060477 1401 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: I1002 19:36:16.060489 1401 memory_manager.go:346] "RemoveStaleState removing state" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.062006 kubelet[1401]: I1002 19:36:16.060495 1401 memory_manager.go:346] "RemoveStaleState removing state" podUID="34c082fd-1c33-496d-be5a-a58a734e36df" containerName="mount-cgroup" Oct 2 19:36:16.064275 kubelet[1401]: I1002 19:36:16.064255 1401 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:36:16.071747 systemd[1]: Created slice kubepods-burstable-pod52251430_6f84_4e98_aa5c_cffaeffd860f.slice. Oct 2 19:36:16.091486 systemd[1]: Created slice kubepods-besteffort-podfd9aa446_4503_4d92_9a91_3dfafddb4b49.slice. Oct 2 19:36:16.213281 kubelet[1401]: I1002 19:36:16.213182 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-host-proc-sys-net\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213281 kubelet[1401]: I1002 19:36:16.213243 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd9aa446-4503-4d92-9a91-3dfafddb4b49-cilium-config-path\") pod \"cilium-operator-574c4bb98d-n96jc\" (UID: \"fd9aa446-4503-4d92-9a91-3dfafddb4b49\") " pod="kube-system/cilium-operator-574c4bb98d-n96jc" Oct 2 19:36:16.213281 kubelet[1401]: I1002 19:36:16.213262 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkjrh\" (UniqueName: \"kubernetes.io/projected/fd9aa446-4503-4d92-9a91-3dfafddb4b49-kube-api-access-tkjrh\") pod \"cilium-operator-574c4bb98d-n96jc\" (UID: \"fd9aa446-4503-4d92-9a91-3dfafddb4b49\") " pod="kube-system/cilium-operator-574c4bb98d-n96jc" Oct 2 19:36:16.213281 kubelet[1401]: I1002 19:36:16.213281 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-hostproc\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213281 kubelet[1401]: I1002 19:36:16.213298 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-config-path\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213339 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-ipsec-secrets\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213406 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cni-path\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 
kubelet[1401]: I1002 19:36:16.213465 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-xtables-lock\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213501 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52251430-6f84-4e98-aa5c-cffaeffd860f-hubble-tls\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213548 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-bpf-maps\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213589 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-etc-cni-netd\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213620 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52251430-6f84-4e98-aa5c-cffaeffd860f-clustermesh-secrets\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213650 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xw2\" (UniqueName: \"kubernetes.io/projected/52251430-6f84-4e98-aa5c-cffaeffd860f-kube-api-access-h4xw2\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213714 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-run\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213749 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-cgroup\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213775 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-lib-modules\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.213908 kubelet[1401]: I1002 19:36:16.213820 1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-host-proc-sys-kernel\") pod \"cilium-5x6nt\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " pod="kube-system/cilium-5x6nt" Oct 2 19:36:16.399498 kubelet[1401]: E1002 19:36:16.399400 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:16.404946 env[1101]: time="2023-10-02T19:36:16.400903629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-n96jc,Uid:fd9aa446-4503-4d92-9a91-3dfafddb4b49,Namespace:kube-system,Attempt:0,}" Oct 2 19:36:16.429294 env[1101]: time="2023-10-02T19:36:16.429050115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:36:16.429294 env[1101]: time="2023-10-02T19:36:16.429111102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:36:16.429294 env[1101]: time="2023-10-02T19:36:16.429125780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:36:16.429742 env[1101]: time="2023-10-02T19:36:16.429686053Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f pid=1948 runtime=io.containerd.runc.v2 Oct 2 19:36:16.442458 systemd[1]: Started cri-containerd-7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f.scope. Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.542222 kernel: audit: type=1400 audit(1696275376.521:644): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.542319 kernel: audit: type=1400 audit(1696275376.521:645): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.542356 kernel: audit: type=1400 audit(1696275376.521:646): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.544569 kernel: audit: type=1400 audit(1696275376.521:647): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.549105 kernel: audit: type=1400 audit(1696275376.521:648): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.549171 kernel: audit: type=1400 audit(1696275376.521:649): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.551478 kernel: audit: type=1400 audit(1696275376.521:650): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.556162 kernel: audit: type=1400 audit(1696275376.521:651): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.521000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit: BPF prog-id=75 op=LOAD Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1948 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313165373736646162346366363731336339396561636637323364 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1948 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313165373736646162346366363731336339396561636637323364 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.538000 audit: BPF prog-id=76 op=LOAD Oct 2 19:36:16.538000 audit[1957]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0003a1b90 items=0 ppid=1948 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313165373736646162346366363731336339396561636637323364 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.540000 audit: BPF prog-id=77 op=LOAD Oct 2 19:36:16.540000 audit[1957]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0003a1bd8 items=0 ppid=1948 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313165373736646162346366363731336339396561636637323364 Oct 2 19:36:16.543000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:36:16.543000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { perfmon } for pid=1957 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit[1957]: AVC avc: denied { bpf } for pid=1957 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.543000 audit: BPF prog-id=78 op=LOAD Oct 2 19:36:16.543000 audit[1957]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003a1fe8 items=0 ppid=1948 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313165373736646162346366363731336339396561636637323364 Oct 2 19:36:16.585045 env[1101]: time="2023-10-02T19:36:16.584998317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-n96jc,Uid:fd9aa446-4503-4d92-9a91-3dfafddb4b49,Namespace:kube-system,Attempt:0,} returns sandbox id \"7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f\"" Oct 2 19:36:16.586244 kubelet[1401]: E1002 19:36:16.585942 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:16.587133 env[1101]: time="2023-10-02T19:36:16.587108127Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:36:16.688975 kubelet[1401]: E1002 19:36:16.688832 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:16.689547 env[1101]: time="2023-10-02T19:36:16.689442027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5x6nt,Uid:52251430-6f84-4e98-aa5c-cffaeffd860f,Namespace:kube-system,Attempt:0,}" Oct 2 19:36:16.794188 env[1101]: time="2023-10-02T19:36:16.793526028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:36:16.794188 env[1101]: time="2023-10-02T19:36:16.793571395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:36:16.794188 env[1101]: time="2023-10-02T19:36:16.793584520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:36:16.794188 env[1101]: time="2023-10-02T19:36:16.793726061Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8 pid=1988 runtime=io.containerd.runc.v2 Oct 2 19:36:16.802715 kubelet[1401]: E1002 19:36:16.802678 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:16.811922 systemd[1]: Started cri-containerd-5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8.scope. Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit: BPF prog-id=79 op=LOAD Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000117c48 a2=10 a3=1c items=0 ppid=1988 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.820000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663638343131306466643032666339373937353437646261303762 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001176b0 a2=3c a3=c items=0 ppid=1988 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663638343131306466643032666339373937353437646261303762 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.820000 audit: BPF prog-id=80 op=LOAD Oct 2 19:36:16.820000 audit[1997]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001179d8 a2=78 a3=c00020a5a0 items=0 ppid=1988 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663638343131306466643032666339373937353437646261303762 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit: BPF prog-id=81 op=LOAD Oct 2 19:36:16.821000 audit[1997]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000117770 a2=78 a3=c00020a5e8 items=0 ppid=1988 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663638343131306466643032666339373937353437646261303762 Oct 2 19:36:16.821000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:36:16.821000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: 
AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { perfmon } for pid=1997 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit[1997]: AVC avc: denied { bpf } for pid=1997 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:16.821000 audit: BPF prog-id=82 op=LOAD Oct 2 19:36:16.821000 audit[1997]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000117c30 a2=78 a3=c00020a9f8 items=0 ppid=1988 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:16.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663638343131306466643032666339373937353437646261303762 Oct 2 19:36:16.834647 env[1101]: time="2023-10-02T19:36:16.834595258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5x6nt,Uid:52251430-6f84-4e98-aa5c-cffaeffd860f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\"" Oct 2 19:36:16.835526 kubelet[1401]: E1002 19:36:16.835490 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:16.839335 env[1101]: time="2023-10-02T19:36:16.839280078Z" level=info msg="CreateContainer within sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:36:16.867456 env[1101]: time="2023-10-02T19:36:16.867379834Z" level=info msg="CreateContainer within sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\"" Oct 2 19:36:16.868189 env[1101]: time="2023-10-02T19:36:16.868133297Z" 
level=info msg="StartContainer for \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\"" Oct 2 19:36:16.886398 systemd[1]: Started cri-containerd-ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f.scope. Oct 2 19:36:16.899081 systemd[1]: cri-containerd-ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f.scope: Deactivated successfully. Oct 2 19:36:16.923429 env[1101]: time="2023-10-02T19:36:16.923353332Z" level=info msg="shim disconnected" id=ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f Oct 2 19:36:16.923429 env[1101]: time="2023-10-02T19:36:16.923426723Z" level=warning msg="cleaning up after shim disconnected" id=ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f namespace=k8s.io Oct 2 19:36:16.923429 env[1101]: time="2023-10-02T19:36:16.923441933Z" level=info msg="cleaning up dead shim" Oct 2 19:36:16.955643 env[1101]: time="2023-10-02T19:36:16.955475300Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:36:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2047 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:36:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:36:16.955907 env[1101]: time="2023-10-02T19:36:16.955808588Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:36:16.958362 env[1101]: time="2023-10-02T19:36:16.958266586Z" level=error msg="Failed to pipe stderr of container \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\"" error="reading from a closed fifo" Oct 2 19:36:16.958778 env[1101]: time="2023-10-02T19:36:16.958726055Z" level=error msg="Failed to pipe stdout of container \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\"" error="reading from a closed fifo" Oct 2 19:36:16.964763 env[1101]: time="2023-10-02T19:36:16.964684584Z" level=error msg="StartContainer for \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:36:16.965118 kubelet[1401]: E1002 19:36:16.965075 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f" Oct 2 19:36:16.965313 kubelet[1401]: E1002 19:36:16.965226 1401 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:36:16.965313 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:36:16.965313 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 19:36:16.965313 kubelet[1401]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h4xw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:36:16.965313 kubelet[1401]: E1002 19:36:16.965292 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:36:17.176536 kubelet[1401]: E1002 19:36:17.176491 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:17.178382 env[1101]: time="2023-10-02T19:36:17.178310434Z" level=info msg="CreateContainer within sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:36:17.307105 env[1101]: time="2023-10-02T19:36:17.307013140Z" level=info msg="CreateContainer within sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\"" Oct 2 19:36:17.307678 env[1101]: time="2023-10-02T19:36:17.307630141Z" level=info msg="StartContainer for \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\"" Oct 2 19:36:17.329483 systemd[1]: Started cri-containerd-be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14.scope. Oct 2 19:36:17.339914 systemd[1]: cri-containerd-be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14.scope: Deactivated successfully. 
Oct 2 19:36:17.340257 systemd[1]: Stopped cri-containerd-be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14.scope. Oct 2 19:36:17.343703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14-rootfs.mount: Deactivated successfully. Oct 2 19:36:17.646375 env[1101]: time="2023-10-02T19:36:17.646223586Z" level=info msg="shim disconnected" id=be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14 Oct 2 19:36:17.646375 env[1101]: time="2023-10-02T19:36:17.646283772Z" level=warning msg="cleaning up after shim disconnected" id=be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14 namespace=k8s.io Oct 2 19:36:17.646375 env[1101]: time="2023-10-02T19:36:17.646295454Z" level=info msg="cleaning up dead shim" Oct 2 19:36:17.654209 env[1101]: time="2023-10-02T19:36:17.654142601Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:36:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2082 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:36:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:36:17.654535 env[1101]: time="2023-10-02T19:36:17.654462483Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 19:36:17.654787 env[1101]: time="2023-10-02T19:36:17.654725878Z" level=error msg="Failed to pipe stdout of container \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\"" error="reading from a closed fifo" Oct 2 19:36:17.654876 env[1101]: time="2023-10-02T19:36:17.654763910Z" level=error msg="Failed to pipe stderr of container \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\"" error="reading from a closed fifo" Oct 2 19:36:17.803260 kubelet[1401]: E1002 19:36:17.803194 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:17.822068 env[1101]: time="2023-10-02T19:36:17.821872116Z" level=error msg="StartContainer for \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:36:17.822248 kubelet[1401]: E1002 19:36:17.822218 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14" Oct 2 19:36:17.822407 kubelet[1401]: E1002 19:36:17.822373 1401 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:36:17.822407 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:36:17.822407 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 
19:36:17.822407 kubelet[1401]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h4xw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:36:17.822608 kubelet[1401]: E1002 19:36:17.822418 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:36:18.180673 kubelet[1401]: I1002 19:36:18.180640 1401 scope.go:115] "RemoveContainer" containerID="ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f" Oct 2 19:36:18.180984 kubelet[1401]: I1002 19:36:18.180967 1401 scope.go:115] "RemoveContainer" containerID="ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f" Oct 2 19:36:18.181861 env[1101]: time="2023-10-02T19:36:18.181829397Z" level=info msg="RemoveContainer for \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\"" Oct 2 19:36:18.182283 env[1101]: time="2023-10-02T19:36:18.182254170Z" level=info msg="RemoveContainer for \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\"" Oct 2 19:36:18.182441 env[1101]: time="2023-10-02T19:36:18.182318333Z" level=error msg="RemoveContainer for \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\" failed" error="failed to set removing state for container \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\": container is already in removing state" Oct 2 19:36:18.182493 kubelet[1401]: E1002 19:36:18.182471 1401 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\": container is already in removing state" containerID="ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f" Oct 2 19:36:18.182568 kubelet[1401]: E1002 19:36:18.182533 1401 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f": container is already in removing state; Skipping pod "cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f)" Oct 2 19:36:18.182643 kubelet[1401]: E1002 19:36:18.182628 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:18.182963 kubelet[1401]: E1002 19:36:18.182948 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f)\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:36:18.726497 env[1101]: time="2023-10-02T19:36:18.726418222Z" level=info msg="RemoveContainer for \"ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f\" returns successfully" Oct 2 19:36:18.803423 kubelet[1401]: E1002 19:36:18.803326 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:19.804446 kubelet[1401]: E1002 19:36:19.804380 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:20.029434 kubelet[1401]: W1002 19:36:20.029382 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52251430_6f84_4e98_aa5c_cffaeffd860f.slice/cri-containerd-ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f.scope WatchSource:0}: container "ebbeb47867a24f25a55de0aaf46f46209f20c8c4443894b340239b32fcf97d0f" in namespace "k8s.io": not found Oct 2 19:36:20.136557 kubelet[1401]: E1002 19:36:20.136455 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:20.771339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362408677.mount: Deactivated successfully. 
Oct 2 19:36:20.804763 kubelet[1401]: E1002 19:36:20.804706 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:21.805063 kubelet[1401]: E1002 19:36:21.804996 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:22.805934 kubelet[1401]: E1002 19:36:22.805867 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:23.136111 kubelet[1401]: W1002 19:36:23.136004 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52251430_6f84_4e98_aa5c_cffaeffd860f.slice/cri-containerd-be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14.scope WatchSource:0}: task be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14 not found: not found Oct 2 19:36:23.806276 kubelet[1401]: E1002 19:36:23.806206 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:24.631847 env[1101]: time="2023-10-02T19:36:24.631658019Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:36:24.636225 env[1101]: time="2023-10-02T19:36:24.636185489Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:36:24.639571 env[1101]: time="2023-10-02T19:36:24.639523411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:36:24.640008 env[1101]: time="2023-10-02T19:36:24.639948646Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 19:36:24.642111 env[1101]: time="2023-10-02T19:36:24.642069337Z" level=info msg="CreateContainer within sandbox \"7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:36:24.675013 kubelet[1401]: E1002 19:36:24.674933 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:24.696542 env[1101]: time="2023-10-02T19:36:24.696453077Z" level=info msg="CreateContainer within sandbox \"7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\"" Oct 2 19:36:24.697197 env[1101]: time="2023-10-02T19:36:24.697157746Z" level=info msg="StartContainer for \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\"" Oct 2 19:36:24.713013 systemd[1]: run-containerd-runc-k8s.io-c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9-runc.hFcfqg.mount: Deactivated successfully. 
Oct 2 19:36:24.716852 systemd[1]: Started cri-containerd-c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9.scope. Oct 2 19:36:24.724000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.739111 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:36:24.739277 kernel: audit: type=1400 audit(1696275384.724:680): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.724000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.743751 kernel: audit: type=1400 audit(1696275384.724:681): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.746050 kernel: audit: type=1400 audit(1696275384.725:682): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.725000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.748371 kernel: audit: type=1400 audit(1696275384.725:683): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.725000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.725000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.750542 kernel: audit: type=1400 audit(1696275384.725:684): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.750603 kernel: audit: type=1400 audit(1696275384.725:685): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.725000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.725000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.754352 kernel: audit: type=1400 audit(1696275384.725:686): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.754400 kernel: audit: type=1400 audit(1696275384.725:687): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.725000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.725000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.758248 kernel: audit: type=1400 audit(1696275384.725:688): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.758300 kernel: audit: type=1400 audit(1696275384.740:689): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.740000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.740000 audit: BPF prog-id=83 op=LOAD Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1948 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:24.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613139346662633132356662383734633433363634316633633832 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1948 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:24.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613139346662633132356662383734633433363634316633633832 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.742000 audit: BPF prog-id=84 op=LOAD Oct 2 19:36:24.742000 audit[2102]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0003c00a0 items=0 ppid=1948 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:24.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613139346662633132356662383734633433363634316633633832 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.744000 audit: BPF prog-id=85 op=LOAD Oct 2 19:36:24.744000 audit[2102]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0003c00e8 items=0 ppid=1948 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:24.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613139346662633132356662383734633433363634316633633832 Oct 2 19:36:24.747000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:36:24.747000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { perfmon } for pid=2102 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit[2102]: AVC avc: denied { bpf } for pid=2102 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:36:24.747000 audit: BPF prog-id=86 op=LOAD Oct 2 19:36:24.747000 audit[2102]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003c04f8 items=0 ppid=1948 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:36:24.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330613139346662633132356662383734633433363634316633633832 Oct 2 19:36:24.786000 audit[2112]: AVC avc: denied { map_create } for pid=2112 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c588,c930 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c588,c930 tclass=bpf permissive=0 Oct 2 19:36:24.786000 audit[2112]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0006d37d0 a2=48 a3=c0006d37c0 items=0 ppid=1948 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c588,c930 key=(null) Oct 2 19:36:24.786000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:36:24.806862 kubelet[1401]: E1002 19:36:24.806819 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:24.841202 env[1101]: time="2023-10-02T19:36:24.841128100Z" level=info msg="StartContainer for \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\" returns successfully" Oct 2 19:36:25.137379 kubelet[1401]: E1002 19:36:25.137330 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:25.196242 kubelet[1401]: E1002 19:36:25.196198 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:25.206122 kubelet[1401]: I1002 19:36:25.206082 1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-n96jc" podStartSLOduration=1.1525526130000001 podCreationTimestamp="2023-10-02 19:36:16 +0000 UTC" firstStartedPulling="2023-10-02 19:36:16.586818322 +0000 UTC m=+192.163876229" lastFinishedPulling="2023-10-02 19:36:24.640280401 +0000 UTC m=+200.217338308" observedRunningTime="2023-10-02 19:36:25.205743792 +0000 UTC m=+200.782801700" watchObservedRunningTime="2023-10-02 19:36:25.206014692 +0000 UTC m=+200.783072589" Oct 2 19:36:25.808045 kubelet[1401]: E1002 19:36:25.807976 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:26.198159 kubelet[1401]: E1002 19:36:26.198054 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:26.808720 kubelet[1401]: E1002 19:36:26.808658 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:27.808994 kubelet[1401]: E1002 19:36:27.808918 1401 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:28.809877 kubelet[1401]: E1002 19:36:28.809797 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:29.810154 kubelet[1401]: E1002 19:36:29.810071 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:30.138318 kubelet[1401]: E1002 19:36:30.138207 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:30.810927 kubelet[1401]: E1002 19:36:30.810873 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:31.811422 kubelet[1401]: E1002 19:36:31.811356 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:32.811614 kubelet[1401]: E1002 19:36:32.811559 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:33.812158 kubelet[1401]: E1002 19:36:33.812069 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:33.849007 kubelet[1401]: E1002 19:36:33.848972 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:33.851417 env[1101]: time="2023-10-02T19:36:33.851353087Z" level=info msg="CreateContainer within sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:36:33.874160 env[1101]: time="2023-10-02T19:36:33.874092020Z" level=info msg="CreateContainer within sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\"" Oct 2 19:36:33.874790 env[1101]: time="2023-10-02T19:36:33.874754669Z" level=info msg="StartContainer for \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\"" Oct 2 19:36:33.890286 systemd[1]: Started cri-containerd-dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c.scope. Oct 2 19:36:33.913924 systemd[1]: cri-containerd-dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c.scope: Deactivated successfully. 
Oct 2 19:36:34.135526 env[1101]: time="2023-10-02T19:36:34.135371902Z" level=info msg="shim disconnected" id=dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c Oct 2 19:36:34.135526 env[1101]: time="2023-10-02T19:36:34.135431355Z" level=warning msg="cleaning up after shim disconnected" id=dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c namespace=k8s.io Oct 2 19:36:34.135526 env[1101]: time="2023-10-02T19:36:34.135446425Z" level=info msg="cleaning up dead shim" Oct 2 19:36:34.143473 env[1101]: time="2023-10-02T19:36:34.143410879Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:36:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2156 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:36:34Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:36:34.143791 env[1101]: time="2023-10-02T19:36:34.143700463Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:36:34.144041 env[1101]: time="2023-10-02T19:36:34.143985750Z" level=error msg="Failed to pipe stdout of container \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\"" error="reading from a closed fifo" Oct 2 19:36:34.146833 env[1101]: time="2023-10-02T19:36:34.146781385Z" level=error msg="Failed to pipe stderr of container \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\"" error="reading from a closed fifo" Oct 2 19:36:34.149356 env[1101]: time="2023-10-02T19:36:34.149318895Z" level=error msg="StartContainer for \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:36:34.149635 kubelet[1401]: E1002 19:36:34.149598 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c" Oct 2 19:36:34.149778 kubelet[1401]: E1002 19:36:34.149720 1401 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:36:34.149778 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:36:34.149778 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 19:36:34.149778 kubelet[1401]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h4xw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:36:34.149778 kubelet[1401]: E1002 19:36:34.149753 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:36:34.213740 kubelet[1401]: I1002 19:36:34.213704 1401 scope.go:115] "RemoveContainer" containerID="be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14" Oct 2 19:36:34.214071 kubelet[1401]: I1002 19:36:34.214023 1401 scope.go:115] "RemoveContainer" containerID="be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14" Oct 2 19:36:34.214826 env[1101]: time="2023-10-02T19:36:34.214758924Z" level=info msg="RemoveContainer for \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\"" Oct 2 19:36:34.215050 env[1101]: time="2023-10-02T19:36:34.214978615Z" level=info msg="RemoveContainer for \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\"" Oct 2 19:36:34.215110 env[1101]: time="2023-10-02T19:36:34.215078807Z" level=error msg="RemoveContainer for \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\" failed" error="failed to set removing state for container \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\": container is already in removing state" Oct 2 19:36:34.215225 kubelet[1401]: E1002 19:36:34.215211 1401 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\": container is already in 
removing state" containerID="be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14" Oct 2 19:36:34.215280 kubelet[1401]: E1002 19:36:34.215246 1401 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14": container is already in removing state; Skipping pod "cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f)" Oct 2 19:36:34.215318 kubelet[1401]: E1002 19:36:34.215299 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:34.215521 kubelet[1401]: E1002 19:36:34.215477 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f)\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:36:34.220486 env[1101]: time="2023-10-02T19:36:34.220439394Z" level=info msg="RemoveContainer for \"be30a626f8d6c26df8364bbd38664f80fb80e426eac3f39d284d703a4e57dd14\" returns successfully" Oct 2 19:36:34.813169 kubelet[1401]: E1002 19:36:34.813106 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:34.868465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c-rootfs.mount: Deactivated successfully. Oct 2 19:36:35.139379 kubelet[1401]: E1002 19:36:35.139280 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:35.814328 kubelet[1401]: E1002 19:36:35.814263 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:36.815093 kubelet[1401]: E1002 19:36:36.814995 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:37.240081 kubelet[1401]: W1002 19:36:37.239974 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52251430_6f84_4e98_aa5c_cffaeffd860f.slice/cri-containerd-dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c.scope WatchSource:0}: task dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c not found: not found Oct 2 19:36:37.815206 kubelet[1401]: E1002 19:36:37.815131 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:38.815662 kubelet[1401]: E1002 19:36:38.815596 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:39.816298 kubelet[1401]: E1002 19:36:39.816239 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:40.140358 kubelet[1401]: E1002 19:36:40.140261 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:40.816763 
kubelet[1401]: E1002 19:36:40.816694 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:41.817793 kubelet[1401]: E1002 19:36:41.817725 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:42.818617 kubelet[1401]: E1002 19:36:42.818533 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:43.818987 kubelet[1401]: E1002 19:36:43.818924 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:44.674506 kubelet[1401]: E1002 19:36:44.674441 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:44.819414 kubelet[1401]: E1002 19:36:44.819355 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:45.140848 kubelet[1401]: E1002 19:36:45.140815 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:45.820024 kubelet[1401]: E1002 19:36:45.819962 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:45.849863 kubelet[1401]: E1002 19:36:45.849826 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:45.850056 kubelet[1401]: E1002 19:36:45.850041 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f)\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:36:46.820901 kubelet[1401]: E1002 19:36:46.820835 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:47.821886 kubelet[1401]: E1002 19:36:47.821815 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:47.850026 kubelet[1401]: E1002 19:36:47.849945 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:48.822796 kubelet[1401]: E1002 19:36:48.822736 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:49.823085 kubelet[1401]: E1002 19:36:49.823010 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:50.141750 kubelet[1401]: E1002 19:36:50.141628 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:50.823863 kubelet[1401]: E1002 19:36:50.823805 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:51.824029 kubelet[1401]: E1002 19:36:51.823967 
1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:52.824772 kubelet[1401]: E1002 19:36:52.824702 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:53.825439 kubelet[1401]: E1002 19:36:53.825343 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:54.825712 kubelet[1401]: E1002 19:36:54.825595 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:55.149773 kubelet[1401]: E1002 19:36:55.149156 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:55.826025 kubelet[1401]: E1002 19:36:55.825913 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:56.826672 kubelet[1401]: E1002 19:36:56.826551 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:57.831374 kubelet[1401]: E1002 19:36:57.831239 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:58.831658 kubelet[1401]: E1002 19:36:58.831471 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:59.833337 kubelet[1401]: E1002 19:36:59.832463 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:59.849346 kubelet[1401]: E1002 19:36:59.849078 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:36:59.865351 env[1101]: time="2023-10-02T19:36:59.861819290Z" level=info msg="CreateContainer within sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:37:00.045895 env[1101]: time="2023-10-02T19:37:00.043602461Z" level=info msg="CreateContainer within sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22\"" Oct 2 19:37:00.050504 env[1101]: time="2023-10-02T19:37:00.049963558Z" level=info msg="StartContainer for \"6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22\"" Oct 2 19:37:00.101141 systemd[1]: Started cri-containerd-6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22.scope. Oct 2 19:37:00.156389 kubelet[1401]: E1002 19:37:00.154520 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:37:00.220573 systemd[1]: cri-containerd-6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22.scope: Deactivated successfully. Oct 2 19:37:00.235653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22-rootfs.mount: Deactivated successfully. 
Oct 2 19:37:00.491765 env[1101]: time="2023-10-02T19:37:00.491370675Z" level=info msg="shim disconnected" id=6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22 Oct 2 19:37:00.492100 env[1101]: time="2023-10-02T19:37:00.492068977Z" level=warning msg="cleaning up after shim disconnected" id=6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22 namespace=k8s.io Oct 2 19:37:00.492191 env[1101]: time="2023-10-02T19:37:00.492168566Z" level=info msg="cleaning up dead shim" Oct 2 19:37:00.513363 env[1101]: time="2023-10-02T19:37:00.508097212Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:37:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2195 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:37:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:37:00.513363 env[1101]: time="2023-10-02T19:37:00.508426957Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:37:00.521127 env[1101]: time="2023-10-02T19:37:00.521014907Z" level=error msg="Failed to pipe stderr of container \"6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22\"" error="reading from a closed fifo" Oct 2 19:37:00.521751 env[1101]: time="2023-10-02T19:37:00.521647615Z" level=error msg="Failed to pipe stdout of container \"6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22\"" error="reading from a closed fifo" Oct 2 19:37:00.586362 env[1101]: time="2023-10-02T19:37:00.585678200Z" level=error msg="StartContainer for \"6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:37:00.586638 kubelet[1401]: E1002 19:37:00.586143 1401 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22" Oct 2 19:37:00.586638 kubelet[1401]: E1002 19:37:00.586296 1401 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:37:00.586638 kubelet[1401]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:37:00.586638 kubelet[1401]: rm /hostbin/cilium-mount Oct 2 19:37:00.586638 kubelet[1401]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h4xw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:37:00.586638 kubelet[1401]: E1002 19:37:00.586343 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:37:00.833662 kubelet[1401]: E1002 19:37:00.833557 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:01.307624 kubelet[1401]: I1002 19:37:01.307169 1401 scope.go:115] "RemoveContainer" containerID="dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c" Oct 2 19:37:01.307853 kubelet[1401]: I1002 19:37:01.307640 1401 scope.go:115] "RemoveContainer" containerID="dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c" Oct 2 19:37:01.314157 env[1101]: time="2023-10-02T19:37:01.312687308Z" level=info msg="RemoveContainer for \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\"" Oct 2 19:37:01.314157 env[1101]: time="2023-10-02T19:37:01.313863786Z" level=info msg="RemoveContainer for \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\"" Oct 2 19:37:01.314157 env[1101]: time="2023-10-02T19:37:01.314007027Z" level=error msg="RemoveContainer for \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\" failed" error="failed to set removing state for container \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\": container is already in removing state" Oct 2 19:37:01.314670 kubelet[1401]: E1002 19:37:01.314256 1401 remote_runtime.go:368] "RemoveContainer from runtime service 
failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\": container is already in removing state" containerID="dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c" Oct 2 19:37:01.314670 kubelet[1401]: E1002 19:37:01.314317 1401 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c": container is already in removing state; Skipping pod "cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f)" Oct 2 19:37:01.314670 kubelet[1401]: E1002 19:37:01.314409 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:37:01.314809 kubelet[1401]: E1002 19:37:01.314743 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f)\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:37:01.399406 env[1101]: time="2023-10-02T19:37:01.396636880Z" level=info msg="RemoveContainer for \"dc8d0459d65eb1351799a3572db3e98cd8293d40ab1654004e3fbc862370c73c\" returns successfully" Oct 2 19:37:01.834652 kubelet[1401]: E1002 19:37:01.834594 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:02.835767 kubelet[1401]: E1002 19:37:02.835588 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:03.629895 kubelet[1401]: W1002 19:37:03.625149 1401 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52251430_6f84_4e98_aa5c_cffaeffd860f.slice/cri-containerd-6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22.scope WatchSource:0}: task 6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22 not found: not found Oct 2 19:37:03.838939 kubelet[1401]: E1002 19:37:03.838689 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:04.674547 kubelet[1401]: E1002 19:37:04.674463 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:04.689563 env[1101]: time="2023-10-02T19:37:04.689090685Z" level=info msg="StopPodSandbox for \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\"" Oct 2 19:37:04.689563 env[1101]: time="2023-10-02T19:37:04.689204051Z" level=info msg="TearDown network for sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" successfully" Oct 2 19:37:04.689563 env[1101]: time="2023-10-02T19:37:04.689261900Z" level=info msg="StopPodSandbox for \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" returns successfully" Oct 2 19:37:04.691573 env[1101]: time="2023-10-02T19:37:04.690388846Z" level=info msg="RemovePodSandbox for \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\"" Oct 2 19:37:04.691573 env[1101]: time="2023-10-02T19:37:04.690424323Z" level=info msg="Forcibly stopping sandbox 
\"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\"" Oct 2 19:37:04.691573 env[1101]: time="2023-10-02T19:37:04.690496590Z" level=info msg="TearDown network for sandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" successfully" Oct 2 19:37:04.831146 env[1101]: time="2023-10-02T19:37:04.829591163Z" level=info msg="RemovePodSandbox \"3e1eee70a144b8527a36d5388ade1875c15a8ac7d7b4f1d0dfd8bba61c6bc156\" returns successfully" Oct 2 19:37:04.842543 kubelet[1401]: E1002 19:37:04.842429 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:05.162307 kubelet[1401]: E1002 19:37:05.159762 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:37:05.846312 kubelet[1401]: E1002 19:37:05.843506 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:06.846537 kubelet[1401]: E1002 19:37:06.846437 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:07.849823 kubelet[1401]: E1002 19:37:07.847564 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:08.852479 kubelet[1401]: E1002 19:37:08.852354 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:09.853467 kubelet[1401]: E1002 19:37:09.853350 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:10.178938 kubelet[1401]: E1002 19:37:10.178548 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:37:10.853569 kubelet[1401]: E1002 19:37:10.853533 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:11.855750 kubelet[1401]: E1002 19:37:11.855551 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:12.858108 kubelet[1401]: E1002 19:37:12.858059 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:13.859763 kubelet[1401]: E1002 19:37:13.859664 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:14.851261 kubelet[1401]: E1002 19:37:14.850071 1401 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:37:14.851261 kubelet[1401]: E1002 19:37:14.850497 1401 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-5x6nt_kube-system(52251430-6f84-4e98-aa5c-cffaeffd860f)\"" pod="kube-system/cilium-5x6nt" podUID=52251430-6f84-4e98-aa5c-cffaeffd860f Oct 2 19:37:14.863472 kubelet[1401]: E1002 19:37:14.863373 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:37:15.191549 kubelet[1401]: E1002 19:37:15.181379 1401 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:37:15.868268 kubelet[1401]: E1002 19:37:15.866326 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:37:16.323145 env[1101]: time="2023-10-02T19:37:16.322863744Z" level=info msg="StopPodSandbox for \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\"" Oct 2 19:37:16.323145 env[1101]: time="2023-10-02T19:37:16.323024007Z" level=info msg="Container to stop \"6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:37:16.325954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8-shm.mount: Deactivated successfully. Oct 2 19:37:16.347964 systemd[1]: cri-containerd-5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8.scope: Deactivated successfully. Oct 2 19:37:16.356000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:37:16.367585 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:37:16.367805 kernel: audit: type=1334 audit(1696275436.356:699): prog-id=79 op=UNLOAD Oct 2 19:37:16.370000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:37:16.377414 kernel: audit: type=1334 audit(1696275436.370:700): prog-id=82 op=UNLOAD Oct 2 19:37:16.411900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8-rootfs.mount: Deactivated successfully. Oct 2 19:37:16.465483 env[1101]: time="2023-10-02T19:37:16.465343445Z" level=info msg="StopContainer for \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\" with timeout 30 (s)" Oct 2 19:37:16.466640 env[1101]: time="2023-10-02T19:37:16.466605561Z" level=info msg="Stop container \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\" with signal terminated" Oct 2 19:37:16.494944 env[1101]: time="2023-10-02T19:37:16.493523292Z" level=info msg="shim disconnected" id=5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8 Oct 2 19:37:16.494944 env[1101]: time="2023-10-02T19:37:16.493599687Z" level=warning msg="cleaning up after shim disconnected" id=5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8 namespace=k8s.io Oct 2 19:37:16.494944 env[1101]: time="2023-10-02T19:37:16.493616289Z" level=info msg="cleaning up dead shim" Oct 2 19:37:16.517000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:37:16.517182 systemd[1]: cri-containerd-c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9.scope: Deactivated successfully. 
Oct 2 19:37:16.522738 kernel: audit: type=1334 audit(1696275436.517:701): prog-id=83 op=UNLOAD Oct 2 19:37:16.527222 env[1101]: time="2023-10-02T19:37:16.523350215Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:37:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2237 runtime=io.containerd.runc.v2\n" Oct 2 19:37:16.527222 env[1101]: time="2023-10-02T19:37:16.523772957Z" level=info msg="TearDown network for sandbox \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" successfully" Oct 2 19:37:16.527222 env[1101]: time="2023-10-02T19:37:16.523800710Z" level=info msg="StopPodSandbox for \"5ef684110dfd02fc9797547dba07b7374b272286d0f964fd017efbd2a23e1cd8\" returns successfully" Oct 2 19:37:16.529000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:37:16.535266 kernel: audit: type=1334 audit(1696275436.529:702): prog-id=86 op=UNLOAD Oct 2 19:37:16.579471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9-rootfs.mount: Deactivated successfully. Oct 2 19:37:16.659792 env[1101]: time="2023-10-02T19:37:16.658369707Z" level=info msg="shim disconnected" id=c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9 Oct 2 19:37:16.659792 env[1101]: time="2023-10-02T19:37:16.658442566Z" level=warning msg="cleaning up after shim disconnected" id=c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9 namespace=k8s.io Oct 2 19:37:16.659792 env[1101]: time="2023-10-02T19:37:16.658455730Z" level=info msg="cleaning up dead shim" Oct 2 19:37:16.678113 env[1101]: time="2023-10-02T19:37:16.678025595Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:37:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2263 runtime=io.containerd.runc.v2\n" Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695332 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-host-proc-sys-net\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695406 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-host-proc-sys-kernel\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695436 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-lib-modules\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695471 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52251430-6f84-4e98-aa5c-cffaeffd860f-clustermesh-secrets\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695496 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-cgroup\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: 
\"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695547 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-ipsec-secrets\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695578 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-hostproc\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695602 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-run\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695628 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-bpf-maps\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695664 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-config-path\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695693 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-xtables-lock\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695724 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4xw2\" (UniqueName: \"kubernetes.io/projected/52251430-6f84-4e98-aa5c-cffaeffd860f-kube-api-access-h4xw2\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695746 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cni-path\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695771 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52251430-6f84-4e98-aa5c-cffaeffd860f-hubble-tls\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: I1002 19:37:16.695804 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-etc-cni-netd\") pod \"52251430-6f84-4e98-aa5c-cffaeffd860f\" (UID: \"52251430-6f84-4e98-aa5c-cffaeffd860f\") " Oct 2 19:37:16.698255 kubelet[1401]: 
I1002 19:37:16.695902 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.704827 kubelet[1401]: I1002 19:37:16.695943 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.704827 kubelet[1401]: I1002 19:37:16.695963 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.704827 kubelet[1401]: I1002 19:37:16.695990 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.700423 systemd[1]: var-lib-kubelet-pods-52251430\x2d6f84\x2d4e98\x2daa5c\x2dcffaeffd860f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:37:16.708260 kubelet[1401]: I1002 19:37:16.706895 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.708260 kubelet[1401]: I1002 19:37:16.706978 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.708260 kubelet[1401]: I1002 19:37:16.707781 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.708260 kubelet[1401]: I1002 19:37:16.707817 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cni-path" (OuterVolumeSpecName: "cni-path") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.715773 env[1101]: time="2023-10-02T19:37:16.711090313Z" level=info msg="StopContainer for \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\" returns successfully"
Oct 2 19:37:16.714769 systemd[1]: var-lib-kubelet-pods-52251430\x2d6f84\x2d4e98\x2daa5c\x2dcffaeffd860f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:37:16.716035 kubelet[1401]: I1002 19:37:16.707848 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-hostproc" (OuterVolumeSpecName: "hostproc") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.716035 kubelet[1401]: I1002 19:37:16.708731 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:37:16.716035 kubelet[1401]: I1002 19:37:16.709171 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52251430-6f84-4e98-aa5c-cffaeffd860f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:37:16.716035 kubelet[1401]: W1002 19:37:16.710775 1401 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/52251430-6f84-4e98-aa5c-cffaeffd860f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:37:16.716035 kubelet[1401]: I1002 19:37:16.714485 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:37:16.716791 kubelet[1401]: I1002 19:37:16.716411 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:37:16.719468 env[1101]: time="2023-10-02T19:37:16.717079490Z" level=info msg="StopPodSandbox for \"7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f\""
Oct 2 19:37:16.719468 env[1101]: time="2023-10-02T19:37:16.717182526Z" level=info msg="Container to stop \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:37:16.722084 kubelet[1401]: I1002 19:37:16.719841 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52251430-6f84-4e98-aa5c-cffaeffd860f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:37:16.724473 kubelet[1401]: I1002 19:37:16.724421 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52251430-6f84-4e98-aa5c-cffaeffd860f-kube-api-access-h4xw2" (OuterVolumeSpecName: "kube-api-access-h4xw2") pod "52251430-6f84-4e98-aa5c-cffaeffd860f" (UID: "52251430-6f84-4e98-aa5c-cffaeffd860f"). InnerVolumeSpecName "kube-api-access-h4xw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:37:16.738466 systemd[1]: cri-containerd-7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f.scope: Deactivated successfully.
Oct 2 19:37:16.738000 audit: BPF prog-id=75 op=UNLOAD
Oct 2 19:37:16.746255 kernel: audit: type=1334 audit(1696275436.738:703): prog-id=75 op=UNLOAD
Oct 2 19:37:16.761000 audit: BPF prog-id=78 op=UNLOAD
Oct 2 19:37:16.768235 kernel: audit: type=1334 audit(1696275436.761:704): prog-id=78 op=UNLOAD
Oct 2 19:37:16.796205 kubelet[1401]: I1002 19:37:16.796095 1401 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-hostproc\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796205 kubelet[1401]: I1002 19:37:16.796152 1401 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-run\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796205 kubelet[1401]: I1002 19:37:16.796178 1401 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-bpf-maps\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796205 kubelet[1401]: I1002 19:37:16.796194 1401 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52251430-6f84-4e98-aa5c-cffaeffd860f-clustermesh-secrets\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796205 kubelet[1401]: I1002 19:37:16.796207 1401 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-cgroup\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796205 kubelet[1401]: I1002 19:37:16.796221 1401 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-ipsec-secrets\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796205 kubelet[1401]: I1002 19:37:16.796235 1401 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52251430-6f84-4e98-aa5c-cffaeffd860f-cilium-config-path\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796700 kubelet[1401]: I1002 19:37:16.796256 1401 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-xtables-lock\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796700 kubelet[1401]: I1002 19:37:16.796271 1401 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h4xw2\" (UniqueName: \"kubernetes.io/projected/52251430-6f84-4e98-aa5c-cffaeffd860f-kube-api-access-h4xw2\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796700 kubelet[1401]: I1002 19:37:16.796284 1401 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52251430-6f84-4e98-aa5c-cffaeffd860f-hubble-tls\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796700 kubelet[1401]: I1002 19:37:16.796296 1401 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-etc-cni-netd\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796700 kubelet[1401]: I1002 19:37:16.796309 1401 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-cni-path\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796700 kubelet[1401]: I1002 19:37:16.796321 1401 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-host-proc-sys-kernel\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796700 kubelet[1401]: I1002 19:37:16.796332 1401 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-lib-modules\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.796700 kubelet[1401]: I1002 19:37:16.796353 1401 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52251430-6f84-4e98-aa5c-cffaeffd860f-host-proc-sys-net\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:16.862534 systemd[1]: Removed slice kubepods-burstable-pod52251430_6f84_4e98_aa5c_cffaeffd860f.slice.
Oct 2 19:37:16.869049 kubelet[1401]: E1002 19:37:16.867335 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:37:16.947712 env[1101]: time="2023-10-02T19:37:16.947641616Z" level=info msg="shim disconnected" id=7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f
Oct 2 19:37:16.948003 env[1101]: time="2023-10-02T19:37:16.947953307Z" level=warning msg="cleaning up after shim disconnected" id=7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f namespace=k8s.io
Oct 2 19:37:16.948003 env[1101]: time="2023-10-02T19:37:16.947980810Z" level=info msg="cleaning up dead shim"
Oct 2 19:37:16.961933 env[1101]: time="2023-10-02T19:37:16.961767719Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:37:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2296 runtime=io.containerd.runc.v2\n"
Oct 2 19:37:16.962789 env[1101]: time="2023-10-02T19:37:16.962715146Z" level=info msg="TearDown network for sandbox \"7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f\" successfully"
Oct 2 19:37:16.962876 env[1101]: time="2023-10-02T19:37:16.962791642Z" level=info msg="StopPodSandbox for \"7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f\" returns successfully"
Oct 2 19:37:17.101872 kubelet[1401]: I1002 19:37:17.101776 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkjrh\" (UniqueName: \"kubernetes.io/projected/fd9aa446-4503-4d92-9a91-3dfafddb4b49-kube-api-access-tkjrh\") pod \"fd9aa446-4503-4d92-9a91-3dfafddb4b49\" (UID: \"fd9aa446-4503-4d92-9a91-3dfafddb4b49\") "
Oct 2 19:37:17.101872 kubelet[1401]: I1002 19:37:17.101870 1401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd9aa446-4503-4d92-9a91-3dfafddb4b49-cilium-config-path\") pod \"fd9aa446-4503-4d92-9a91-3dfafddb4b49\" (UID: \"fd9aa446-4503-4d92-9a91-3dfafddb4b49\") "
Oct 2 19:37:17.106233 kubelet[1401]: W1002 19:37:17.102203 1401 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/fd9aa446-4503-4d92-9a91-3dfafddb4b49/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:37:17.106233 kubelet[1401]: I1002 19:37:17.104818 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9aa446-4503-4d92-9a91-3dfafddb4b49-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd9aa446-4503-4d92-9a91-3dfafddb4b49" (UID: "fd9aa446-4503-4d92-9a91-3dfafddb4b49"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:37:17.113040 kubelet[1401]: I1002 19:37:17.110300 1401 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9aa446-4503-4d92-9a91-3dfafddb4b49-kube-api-access-tkjrh" (OuterVolumeSpecName: "kube-api-access-tkjrh") pod "fd9aa446-4503-4d92-9a91-3dfafddb4b49" (UID: "fd9aa446-4503-4d92-9a91-3dfafddb4b49"). InnerVolumeSpecName "kube-api-access-tkjrh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:37:17.203337 kubelet[1401]: I1002 19:37:17.202378 1401 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd9aa446-4503-4d92-9a91-3dfafddb4b49-cilium-config-path\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:17.203337 kubelet[1401]: I1002 19:37:17.202443 1401 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tkjrh\" (UniqueName: \"kubernetes.io/projected/fd9aa446-4503-4d92-9a91-3dfafddb4b49-kube-api-access-tkjrh\") on node \"10.0.0.18\" DevicePath \"\""
Oct 2 19:37:17.328941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f-rootfs.mount: Deactivated successfully.
Oct 2 19:37:17.329085 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7311e776dab4cf6713c99eacf723d50f11585a593a250753272e176600dfa14f-shm.mount: Deactivated successfully.
Oct 2 19:37:17.329179 systemd[1]: var-lib-kubelet-pods-52251430\x2d6f84\x2d4e98\x2daa5c\x2dcffaeffd860f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh4xw2.mount: Deactivated successfully.
Oct 2 19:37:17.329261 systemd[1]: var-lib-kubelet-pods-fd9aa446\x2d4503\x2d4d92\x2d9a91\x2d3dfafddb4b49-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtkjrh.mount: Deactivated successfully.
Oct 2 19:37:17.329332 systemd[1]: var-lib-kubelet-pods-52251430\x2d6f84\x2d4e98\x2daa5c\x2dcffaeffd860f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:37:17.452625 kubelet[1401]: I1002 19:37:17.449016 1401 scope.go:115] "RemoveContainer" containerID="c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9"
Oct 2 19:37:17.454660 systemd[1]: Removed slice kubepods-besteffort-podfd9aa446_4503_4d92_9a91_3dfafddb4b49.slice.
Oct 2 19:37:17.459796 env[1101]: time="2023-10-02T19:37:17.458829067Z" level=info msg="RemoveContainer for \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\""
Oct 2 19:37:17.510199 env[1101]: time="2023-10-02T19:37:17.509658294Z" level=info msg="RemoveContainer for \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\" returns successfully"
Oct 2 19:37:17.512780 kubelet[1401]: I1002 19:37:17.512698 1401 scope.go:115] "RemoveContainer" containerID="c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9"
Oct 2 19:37:17.515930 env[1101]: time="2023-10-02T19:37:17.513722579Z" level=error msg="ContainerStatus for \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\": not found"
Oct 2 19:37:17.516722 kubelet[1401]: E1002 19:37:17.516417 1401 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\": not found" containerID="c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9"
Oct 2 19:37:17.516722 kubelet[1401]: I1002 19:37:17.516497 1401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9} err="failed to get container status \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0a194fbc125fb874c436641f3c824c8d1a4b379b22f3392184e914211afbad9\": not found"
Oct 2 19:37:17.516722 kubelet[1401]: I1002 19:37:17.516536 1401 scope.go:115] "RemoveContainer" containerID="6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22"
Oct 2 19:37:17.518539 env[1101]: time="2023-10-02T19:37:17.518097685Z" level=info msg="RemoveContainer for \"6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22\""
Oct 2 19:37:17.568635 env[1101]: time="2023-10-02T19:37:17.566176040Z" level=info msg="RemoveContainer for \"6abb4d96642f24b902375a0bf73c843eb7fedd031c7ad9cd5d9d3d0c3997be22\" returns successfully"
Oct 2 19:37:17.872337 kubelet[1401]: E1002 19:37:17.871362 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
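The closing RemoveContainer/ContainerStatus exchange above shows the kubelet asking the runtime for the status of a container it has just removed; containerd answers with gRPC code NotFound, which the kubelet logs ("DeleteContainer returned error") before moving on to the next container. Below is a minimal sketch of how a CRI client can recognise that case, assuming only the google.golang.org/grpc status and codes packages; alreadyRemoved is an illustrative helper, not kubelet code.

// Sketch: treat a gRPC NotFound from the runtime as "container already removed".
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether err carries the gRPC NotFound code that the
// runtime returns once the queried container has been deleted.
func alreadyRemoved(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Illustrative stand-in for the error surfaced by remote_runtime.go above.
	err := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
	if alreadyRemoved(err) {
		fmt.Println("container already gone; nothing left to delete")
	}
}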